query_id: string (length 32–32)
query: string (length 6–5.38k)
positive_passages: list (length 1–17)
negative_passages: list (length 9–100)
subset: string (7 classes)
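Read as a record schema, each row can be checked programmatically. Below is a minimal sketch in plain Python; the dict layout mirrors the fields above, and reading the "5.38k" bound as 5380 characters is an assumption:

```python
def validate_row(row):
    """Validate one dataset row against the length bounds in the schema above."""
    assert len(row["query_id"]) == 32            # query_id: fixed 32-char hash
    assert 6 <= len(row["query"]) <= 5380        # query: 6 to ~5.38k characters (assumed bound)
    assert 1 <= len(row["positive_passages"]) <= 17
    assert 9 <= len(row["negative_passages"]) <= 100
    for passage in row["positive_passages"] + row["negative_passages"]:
        # each passage is a dict with docid, text, and a (possibly empty) title
        assert {"docid", "text", "title"} <= set(passage)
    return True
```

The rows below (e.g. the "Conditional Similarity Networks" query) follow exactly this layout.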
query_id: 37f4180e63b56480aab4b84a026079f9
query: Conditional Similarity Networks
positive_passages:
[ { "docid": "4eb1636ff952677938114bcf2d81a636", "text": "A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.", "title": "" } ]
negative_passages:
[ { "docid": "c2195ae053d1bbf712c96a442a911e31", "text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaptation methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lie on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces with the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. 
Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.", "title": "" }, { "docid": "35dc0d377749ebc6a004ce42ee0d55a0", "text": "Two- and four-pole 0.7-1.1-GHz tunable bandpass-to-bandstop filters with bandwidth control are presented. The bandpass-to-bandstop transformation and the bandwidth control are achieved by adjusting the coupling coefficients in an asymmetrically loaded microstrip resonator. The source/load and input/output coupling coefficients are controlled using an RF microelectromechanical systems (RF MEMS) switch and a series coupling varactor, respectively. The two- and four-pole filters are built on a Duroid substrate with εr = 6.15 and h = 25 mil. The tuning for the center frequency and the bandwidth is done using silicon varactor diodes, and RF MEMS switches are used for the bandpass-to-bandstop transformation. In the bandpass mode of the two-pole filter, a center frequency tuning of 0.78-1.10 GHz is achieved with a tunable 1-dB bandwidth of 68-120 MHz at 0.95 GHz. The rejection level of the two-pole bandstop mode is higher than 30 dB. The bandpass mode in the four-pole filter has a center frequency tuning of 0.76-1.08 GHz and a tunable 1-dB bandwidth of 64-115 MHz at 0.94 GHz. The rejection level of the four-pole bandstop mode is larger than 40 dB. The application areas are in wideband cognitive radios under high interference environments.", "title": "" }, { "docid": "e43cb8fefc7735aeab0fa40ad44a2e15", "text": "Support vector machine (SVM) is an optimal margin based classification technique in machine learning. SVM is a binary linear classifier which has been extended to non-linear data using Kernels and multi-class data using various techniques like one-versus-one, one-versus-rest, Crammer Singer SVM, Weston Watkins SVM and directed acyclic graph SVM (DAGSVM) etc. SVM with a linear Kernel is called linear SVM and one with a non-linear Kernel is called non-linear SVM. 
Linear SVM is an efficient technique for high dimensional data applications like document classification, word-sense disambiguation, drug design etc. because under such data applications, test accuracy of linear SVM is closer to non-linear SVM while its training is much faster than non-linear SVM. SVM has been continuously evolving since its inception and researchers have proposed many problem formulations, solvers and strategies for solving SVM. Moreover, due to advancements in the technology, data has taken the form of ‘Big Data’ which has posed a challenge for Machine Learning to train a classifier on this large-scale data. In this paper, we have presented a review on evolution of linear support vector machine classification, its solvers, strategies to improve solvers, experimental results, current challenges and research directions.", "title": "" }, { "docid": "3df6d6d8982c338f70e851307fc70948", "text": "Proof-of-Stake (PoS) protocols have been actively researched for the past five years. PoS finds direct applicability in open blockchain platforms and has been seen as a strong candidate to replace the largely inefficient Proof of Work mechanism that is currently plugged in most existing open blockchains. Although a number of PoS variants have been proposed, these protocols suffer from a number of security shortcomings; for instance, most existing PoS variants suffer from the nothing at stake and the long range attacks which considerably degrade security in the blockchain. In this paper, we address these problems and we propose two PoS protocols that allow validators to generate at most one block at any given “height”, thus alleviating the problem of nothing at stake and preventing attackers from compromising accounts to mount long range attacks. Our first protocol leverages a dedicated digital signature scheme that reveals the identity of the validator if the validator attempts to work on multiple blocks at the same height. 
On the other hand, our second protocol leverages existing pervasive Trusted Execution Environments (TEEs) to limit the block generation requests by any given validator to a maximum of one at a given height. We analyze the security of our proposals and evaluate their performance by means of implementation; our evaluation results show that our proposals introduce tolerable overhead in the block generation and validation process when compared to existing PoS protocols.", "title": "" }, { "docid": "de5b79a5debac750a4970516778d926c", "text": "Vertical channel (VC) 3D NAND Flash may be categorized into two types of channel formation: (1) \"U-turn\" string, where both BL and source are connected at top thus channel current flows in a U-turn way; (2) \"Bottom source\", where source is connected at the bottom thus channel current flows only in one way. For the single-gate vertical channel (SGVC) 3D NAND architecture [1], it is also possible to develop a bottom source structure. The detailed array decoding method is illustrated. In this work, the challenges of bottom source processing and thin poly channel formation are extensively studied. It is found that the two-step poly formation and the bottom recess control are two key factors governing the device initial performance. In general, the two-step poly formation with additional poly spacer etching technique seems to cause degradation of both the poly mobility and device subthreshold slope. Sufficient thermal annealing is needed to recover the damage. Moreover, the bottom connection needs an elegant recess control for better read current as well as bottom ground-select transistor (GSL) device optimizations.", "title": "" }, { "docid": "1726729c32f43917802b902267769dda", "text": "The creation of micro air vehicles (MAVs) of the same general sizes and weight as natural fliers has spawned renewed interest in flapping wing flight. 
With a wingspan of approximately 15 cm and a flight speed of a few meters per second, MAVs experience the same low Reynolds number (10–10) flight conditions as their biological counterparts. In this flow regime, rigid fixed wings drop dramatically in aerodynamic performance while flexible flapping wings gain efficacy and are the preferred propulsion method for small natural fliers. Researchers have long realized that steady-state aerodynamics does not properly capture the physical phenomena or forces present in flapping flight at this scale. Hence, unsteady flow mechanisms must dominate this regime. Furthermore, due to the low flight speeds, any disturbance such as gusts or wind will dramatically change the aerodynamic conditions around the MAV. In response, a suitable feedback control system and actuation technology must be developed so that the wing can maintain its aerodynamic efficiency in this extremely dynamic situation; one where the unsteady separated flow field and wing structure are tightly coupled and interact nonlinearly. For instance, birds and bats control their flexible wings with muscle tissue to successfully deal with rapid changes in the flow environment. Drawing from their example, perhaps MAVs can use lightweight actuators in conjunction with adaptive feedback control to shape the wing and achieve active flow control. This article first reviews the scaling laws and unsteady flow regime constraining both biological and man-made fliers. Then a summary of vortex dominated unsteady aerodynamics follows. Next, aeroelastic coupling and its effect on lift and thrust are discussed. Afterwards, flow control strategies found in nature and devised by man to deal with separated flows are examined. Recent work is also presented in using microelectromechanical systems (MEMS) actuators and angular speed variation to achieve active flow control for MAVs. 
Finally, an explanation for aerodynamic gains seen in flexible versus rigid membrane wings, derived from an unsteady three-dimensional computational fluid dynamics model with an integrated distributed control algorithm, is presented. © 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "309080fa2ef4f959951c08527ec1980d", "text": "Complete scene understanding has been an aspiration of computer vision since its very early days. It has applications in autonomous navigation, aerial imaging, surveillance, and human-computer interaction, among several other active areas of research. While many methods since the advent of deep learning have taken performance in several scene understanding tasks to respectable levels, the tasks are far from being solved. One problem that plagues scene understanding is low resolution. Convolutional Neural Networks that achieve impressive results on high resolution struggle when confronted with low resolution because of the inability to learn hierarchical features and weakening of signal with depth. In this thesis, we study low resolution and suggest approaches that can overcome its consequences on three popular tasks: object detection, in-the-wild face recognition, and semantic segmentation. The popular object detectors were designed for, trained, and benchmarked on datasets that have a strong bias towards medium and large sized objects. When these methods are finetuned and tested on a dataset of small objects, they perform miserably. The most successful detection algorithms follow a two-stage pipeline: the first which quickly generates regions of interest that are likely to contain the object and the second, which classifies these proposal regions. We aim to adapt both these stages for the case of small objects; the first by modifying anchor box generation based on theoretical considerations, and the second using a simple-yet-effective super-resolution step. 
Motivated by the success of being able to detect small objects, we study the problem of detecting and recognising objects with huge variations in resolution, in the problem of face recognition in semistructured scenes. Semi-structured scenes like social settings are more challenging than regular ones: there are several more faces of vastly different scales, there are large variations in illumination, pose and expression, and the existing datasets do not capture these variations. We address the unique challenges in this setting by (i) benchmarking popular methods for the problem of face detection, and (ii) proposing a method based on resolution-specific networks to handle different scales. Semantic segmentation is a more challenging localisation task where the goal is to assign a semantic class label to every pixel in the image. Solving such a problem is crucial for self-driving cars where we need sharper boundaries for roads, obstacles and paraphernalia. For want of a higher receptive field and a more global view of the image, CNN networks forgo resolution. This results in poor segmentation of complex boundaries, small and thin objects. We propose prefixing a super-resolution step before semantic segmentation. Through experiments, we show that a performance boost can be obtained on the popular streetview segmentation dataset, CityScapes.", "title": "" }, { "docid": "d09573af38436e0892695bcda052758f", "text": "Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable among PFC neurons. Although many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population extinguished value coding. 
However, a special population of neurons in anterior cingulate cortex (ACC), but not in orbitofrontal cortex (OFC), multiplexed chosen value across decision parameters using a unified encoding scheme and encoded reward prediction errors. In contrast, neurons in OFC, but not ACC, encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, whereas ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters.", "title": "" }, { "docid": "a93833a6ad41bdc5011a992509e77c9a", "text": "We present the implementation of a large-vocabulary continuous speech recognition (LVCSR) system on NVIDIA’s Tegra K1 hybrid GPU-CPU embedded platform. The system is trained on a standard 1000-hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. The fact that the system is real-time capable and consumes less than 7.5 watts peak makes the system perfectly suitable for fast, but precise, offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.", "title": "" }, { "docid": "b12defb3d9d7c5ccda8c3e0b0858f55f", "text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. 
We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.", "title": "" }, { "docid": "46cabd836b416be86a18262bc58e9dec", "text": "Encrypting data on client-side before uploading it to a cloud storage is essential for protecting users' privacy. However client-side encryption is at odds with the standard practice of deduplication. Reconciling client-side encryption with cross-user deduplication is an active research topic. We present the first secure cross-user deduplication scheme that supports client-side encryption without requiring any additional independent servers. Interestingly, the scheme is based on using a PAKE (password authenticated key exchange) protocol. We demonstrate that our scheme provides better security guarantees than previous efforts. We show both the effectiveness and the efficiency of our scheme, via simulations using realistic datasets and an implementation.", "title": "" }, { "docid": "0c42d9b5831d9e982c29a0b0b4993309", "text": "Insider threat detection requires the identification of rare anomalies in contexts where evolving behaviors tend to mask such anomalies. This paper proposes and tests an ensemble-based stream mining algorithm based on supervised learning that addresses this challenge by maintaining an evolving collection of multiple models to classify dynamic data streams of unbounded length. The result is a classifier that exhibits substantially increased classification accuracy for real insider threat streams relative to traditional supervised learning (traditional SVM and one-class SVM) and other single-model approaches.", "title": "" }, { "docid": "67ca7b4e38b545cd34ef79f305655a45", "text": "Failsafe performance is clarified for electric vehicles (EVs) with the drive structure driven by front and rear wheels independently, i.e., front and rear wheel independent drive type (FRID) EV. 
A simulator based on the four-wheel vehicle model, which can be applied to various types of drive systems like four in-wheel motor-drive-type EVs, is used for the clarification. Yaw rate and skid angle, which are related to drivability and steerability of vehicles and which further influence the safety of vehicles during runs, are analyzed under the condition that one of the motor drive systems fails while cornering on wet roads. In comparison with the four in-wheel motor-drive-type EVs, it is confirmed that the EVs with the structure focused in this paper have little change of the yaw rate and that hardly any dangerous phenomena appear, which would cause an increase in the skid angle of vehicles even if the front or rear wheel drive systems fail when running on wet roads with low friction coefficient. Moreover, the failsafe drive performance of the FRID EVs with the aforementioned structure is verified through experiments using a prototype EV.", "title": "" }, { "docid": "d32fdc6d5dd535079b93b2695ca917d5", "text": "We present a discrete spectral framework for the sparse or cardinality-constrained solution of a generalized Rayleigh quotient. This NP-hard combinatorial optimization problem is central to supervised learning tasks such as sparse LDA, feature selection and relevance ranking for classification. We derive a new generalized form of the Inclusion Principle for variational eigenvalue bounds, leading to exact and optimal sparse linear discriminants using branch-and-bound search. An efficient greedy (approximate) technique is also presented. The generalization performance of our sparse LDA algorithms is demonstrated with real-world UCI ML benchmarks and compared to a leading SVM-based gene selection algorithm for cancer classification.", "title": "" }, { "docid": "1d6bc809c0870ea88d7c66d330456da3", "text": "Orodispersible films (ODFs) are intended to disintegrate within seconds when placed onto the tongue. 
The common way of manufacturing is the solvent casting method. Flexographic printing on drug-free ODFs is introduced as a highly flexible and cost-effective alternative manufacturing method in this study. Rasagiline mesylate and tadalafil were used as model drugs. Printing of rasagiline solutions and tadalafil suspensions was feasible. Up to four printing cycles were performed. The possibility to employ several printing cycles enables a continuous, highly flexible manufacturing process, for example for individualised medicine. The obtained ODFs were characterised regarding their mechanical properties, their disintegration time, API crystallinity and homogeneity. Rasagiline mesylate did not recrystallise after the printing process. Relevant film properties were not affected by printing. Results were comparable to the results of ODFs manufactured with the common solvent casting technique, but the APIs are less stressed through mixing, solvent evaporation and heat. Further, loss of material due to cutting jumbo and daughter rolls can be reduced. Therefore, a versatile new manufacturing technology particularly for processing high-potent low-dose or heat sensitive drugs is introduced in this study.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. 
Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. 
This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" }, { "docid": "2b6087cab37980b1363b343eb0f81822", "text": "We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.", "title": "" }, { "docid": "bac88254869f9b83aaf539b775d9ec66", "text": "The medicinal herb feverfew [Tanacetum parthenium (L.) Schultz-Bip.] has long been used as a folk remedy for the treatment of migraine and arthritis. Parthenolide, a sesquiterpene lactone, is considered to be the primary bioactive compound in feverfew having anti-migraine, anti-tumor, and anti-inflammatory properties. 
In this study we determined, through in vitro bioassays, the inhibitory activity of parthenolide and golden feverfew extract against two human breast cancer cell lines (Hs605T and MCF-7) and one human cervical cancer cell line (SiHa). Feverfew ethanolic extract inhibited the growth of all three types of cancer cells with a half-effective concentration (EC50) of 1.5 mg/mL against Hs605T, 2.1 mg/mL against MCF-7, and 0.6 mg/mL against SiHa. Among the tested constituents of feverfew (i.e., parthenolide, camphor, luteolin, and apigenin), parthenolide showed the highest inhibitory effect with an EC50 against Hs605T, MCF-7, and SiHa of 2.6 μg/mL, 2.8 μg/mL, and 2.7 μg/mL, respectively. Interactions between parthenolide and flavonoids (apigenin and luteolin) in feverfew extract also were investigated to elucidate possible synergistic or antagonistic effects. The results revealed that apigenin and luteolin might have moderate to weak synergistic effects with parthenolide on the inhibition of cancer cell growth of Hs605T, MCF-7, and SiHa.", "title": "" }, { "docid": "09568a739d2e354e0a781a1695e9a51e", "text": "BACKGROUND\nThere are clear indications for benefits of stance control orthoses compared to locked knee ankle foot orthoses. 
However, stance control orthoses still have limited function compared with a sound human leg.\n\n\nOBJECTIVES\nThe aim of this study was to evaluate the potential benefits of a microprocessor stance and swing control orthosis compared to stance control orthoses and locked knee ankle foot orthoses in activities of daily living.\n\n\nSTUDY DESIGN\nSurvey of lower limb orthosis users before and after fitting of a microprocessor stance and swing control orthosis.\n\n\nMETHODS\nThirteen patients with various lower limb pareses completed a baseline survey for their current orthotic device (locked knee ankle foot orthosis or stance control orthosis) and a follow-up for the microprocessor stance and swing control orthosis with the Orthosis Evaluation Questionnaire, a new self-reported outcome measure devised by modifying the Prosthesis Evaluation Questionnaire for use in lower limb orthotics and the Activities of Daily Living Questionnaire.\n\n\nRESULTS\nThe Orthosis Evaluation Questionnaire results demonstrated significant improvements by microprocessor stance and swing control orthosis use in the total score and the domains of ambulation ( p = .001), paretic limb health ( p = .04), sounds ( p = .02), and well-being ( p = .01). Activities of Daily Living Questionnaire results showed significant improvements with the microprocessor stance and swing control orthosis with regard to perceived safety and difficulty of activities of daily living.\n\n\nCONCLUSION\nThe microprocessor stance and swing control orthosis may facilitate an easier, more physiological, and safer execution of many activities of daily living compared to traditional leg orthosis technologies. Clinical relevance This study compared patient-reported outcomes of a microprocessor stance and swing control orthosis (C-Brace) to those with traditional knee ankle foot orthosis and stance control orthosis devices. 
The C-Brace offers new functions including controlled knee flexion during weight bearing and dynamic swing control, resulting in significant improvements in perceived orthotic mobility and safety.", "title": "" }, { "docid": "8cac4d9b14b0e2918a52f3e71cc440bd", "text": "Cyber-Physical Systems refer to systems that have an interaction between computers, communication channels and physical devices to solve a real-world problem. Towards industry 4.0 revolution, Cyber-Physical Systems currently become one of the main targets of hackers and any damage to them lead to high losses to a nation. According to valid resources, several cases reported involved security breaches on Cyber-Physical Systems. Understanding fundamental and theoretical concept of security in the digital world was discussed worldwide. Yet, security cases in regard to the cyber-physical system are still remaining less explored. In addition, limited tools were introduced to overcome security problems in Cyber-Physical System. To improve understanding and introduce a lot more security solutions for the cyber-physical system, the study on this matter is highly on demand. In this paper, we investigate the current threats on Cyber-Physical Systems and propose a classification and matrix for these threats, and conduct a simple statistical analysis of the collected data using a quantitative approach. We confirmed four components i.e., (the type of attack, impact, intention and incident categories) main contributor to threat taxonomy of Cyber-Physical Systems. Keywords—Cyber-Physical Systems; threats; incidents; security; cybersecurity; taxonomies; matrix; threats analysis", "title": "" } ]
subset: scidocsrr
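A row like the one above is typically consumed by pairing its query with one positive and one negative passage at a time, for example to build contrastive training triples for a retriever. The sketch below is a minimal illustration in plain Python; the triple format and the `make_triples` helper are assumptions for illustration, not part of the dataset:

```python
from itertools import product

def make_triples(row, max_negatives=None):
    """Yield (query, positive_text, negative_text) triples from one row,
    pairing every positive passage with every kept negative passage."""
    negatives = row["negative_passages"]
    if max_negatives is not None:
        negatives = negatives[:max_negatives]  # optionally cap negatives per row
    for pos, neg in product(row["positive_passages"], negatives):
        yield (row["query"], pos["text"], neg["text"])
```

With 1–17 positives and 9–100 negatives per row (per the schema), this yields between 9 and 1,700 triples per row before any capping or sampling.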
query_id: e12f1dea29965bfcd5908d69671d7e49
query: Access Control Models for Virtual Object Communication in Cloud-Enabled IoT
positive_passages:
[ { "docid": "c2571afd6f2b9e9856c8f8c4eeb60b81", "text": "In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages – not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.", "title": "" }, { "docid": "a08fe0c015f5fc02b7654f3fd00fb599", "text": "Recently, there has been considerable interest in attribute based access control (ABAC) to overcome the limitations of the dominant access control models (i.e, discretionary-DAC, mandatory-MAC and role based-RBAC) while unifying their advantages. Although some proposals for ABAC have been published, and even implemented and standardized, there is no consensus on precisely what is meant by ABAC or the required features of ABAC. There is no widely accepted ABAC model as there are for DAC, MAC and RBAC. This paper takes a step towards this end by constructing an ABAC model that has “just sufficient” features to be “easily and naturally” configured to do DAC, MAC and RBAC. For this purpose we understand DAC to mean owner-controlled access control lists, MAC to mean lattice-based access control with tranquility and RBAC to mean flat and hierarchical RBAC. Our central contribution is to take a first cut at establishing formal connections between the three successful classical models and desired ABAC models.", "title": "" } ]
negative_passages:
[ { "docid": "8eb96ae8116a16e24e6a3b60190cc632", "text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics, discusses what current KM metrics are in use, and examines their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.", "title": "" }, { "docid": "8a6e062d17ee175e00288dd875603a9c", "text": "Code summarization, aiming to generate succinct natural language descriptions of source code, is extremely useful for code search and code comprehension. It has played an important role in software maintenance and evolution. Previous approaches generate summaries by retrieving summaries from similar code snippets. However, these approaches heavily rely on whether similar code snippets can be retrieved, how similar the snippets are, and fail to capture the API knowledge in the source code, which carries vital information about the functionality of the source code. In this paper, we propose a novel approach, named TL-CodeSum, which successfully applies API knowledge learned in a different but related task to code summarization.
Experiments on large-scale real-world industry Java projects indicate that our approach is effective and outperforms the state-of-the-art in code summarization.", "title": "" }, { "docid": "398c791338adf824a81a2bfb8f35c6bb", "text": "Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data.
To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.", "title": "" }, { "docid": "acddf623a4db29f60351f41eb8d0b113", "text": "In an age where people are becoming increasingly likely to trust information found through online media, journalists have begun employing techniques to lure readers to articles by using catchy headlines, called clickbait. These headlines entice the user into clicking through the article whilst not providing information relevant to the headline itself. Previous methods of detecting clickbait have explored techniques heavily dependent on feature engineering, with little experimentation having been tried with neural network architectures. We introduce a novel model combining recurrent neural networks, attention layers and image embeddings. Our model uses a combination of distributed word embeddings derived from unannotated corpora and character-level embeddings calculated through Convolutional Neural Networks. These representations are passed through a bidirectional LSTM with an attention layer. The image embeddings are also learnt from large data using CNNs. Experimental results show that our model achieves an F1 score of 65.37%, beating the previous benchmark of 55.21%.", "title": "" }, { "docid": "fc875b50a03dcae5cbde23fa7f9b16bf", "text": "Although considerable research has shown the importance of social connection for physical health, little is known about the higher-level neurocognitive processes that link experiences of social connection or disconnection with health-relevant physiological responses. Here we review the key physiological systems implicated in the link between social ties and health and the neural mechanisms that may translate social experiences into downstream health-relevant physiological responses. 
Specifically, we suggest that threats to social connection may tap into the same neural and physiological 'alarm system' that responds to other critical survival threats, such as the threat or experience of physical harm. Similarly, experiences of social connection may tap into basic reward-related mechanisms that have inhibitory relationships with threat-related responding. Indeed, the neurocognitive correlates of social disconnection and connection may be important mediators for understanding the relationships between social ties and health.", "title": "" }, { "docid": "3f5097b33aab695678caca712b649a8f", "text": "I quantitatively measure the nature of the media's interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets.", "title": "" }, { "docid": "d6602271d7024f7d894b14da52299ccc", "text": "BACKGROUND\nMost articles on face composite tissue allotransplantation have considered ethical and immunologic aspects. Few have dealt with the technical aspects of graft procurement. The authors report the technical difficulties involved in procuring a lower face graft for allotransplantation.\n\n\nMETHODS\nAfter a preclinical study of 20 fresh cadavers, the authors carried out an allotransplantation of the lower two-thirds of the face on a patient in January of 2007. The graft included all the perioral muscles, the facial nerves (VII, V2, and V3) and, for the first time, the parotid glands.\n\n\nRESULTS\nThe preclinical study and clinical results confirm that complete revascularization of a graft consisting of the lower two-thirds of the face is possible from a single facial pedicle. All dissections were completed within 3 hours. Graft procurement for the clinical study took 4 hours. The authors harvested the soft tissues of the face en bloc to save time and to prevent tissue injury. They restored the donor's face within approximately 4 hours, using a resin mask colored to resemble the donor's skin tone. All nerves were easily reattached. Voluntary activity was detected on clinical examination 5 months postoperatively, and electromyography confirmed nerve regrowth, with activity predominantly on the left side. The patient requested local anesthesia for biopsies performed in month 4.\n\n\nCONCLUSIONS\nPartial facial composite tissue allotransplantation of the lower two-thirds of the face is technically feasible, with a good cosmetic and functional outcome in selected clinical cases. 
Flaps of this type establish vascular and neurologic connections in a reliable manner and can be procured with a rapid, standardized procedure.", "title": "" }, { "docid": "bba6fad7d1d32683e95e475632c9a9e5", "text": "A great variety of text tasks such as topic or spam identification, user profiling, and sentiment analysis can be posed as a supervised learning problem and tackled using a text classifier. A text classifier consists of several subprocesses, some of them are general enough to be applied to any supervised learning problem, whereas others are specifically designed to tackle a particular task, using complex and computationally expensive processes such as lemmatization, syntactic analysis, etc. Contrary to traditional approaches, we propose a minimalistic and wide system able to tackle text classification tasks independent of domain and language, namely μTC. It is composed of some easy-to-implement text transformations, text representations, and a supervised learning algorithm. These pieces produce a competitive classifier even in the domain of informally written text. We provide a detailed description of μTC along with an extensive experimental comparison with relevant state-of-the-art methods. μTC was compared on 30 different datasets. Regarding accuracy, μTC obtained the best performance in 20 datasets while achieving competitive results in the remaining 10. The compared datasets include several problems like topic and polarity classification, spam detection, user profiling and authorship attribution. Furthermore, it is important to state that our approach allows the usage of the technology even without knowledge of machine learning and natural language processing.", "title": "" }, { "docid": "07425e53be0f6314d52e3b4de4d1b601", "text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore, opioid-dependent participants discounted delayed heroin significantly more than delayed money.", "title": "" }, { "docid": "7ca7ec2efe89bc031cc8aa5ce549c7f5", "text": "Conventional reverse vending machines use complex image processing technology to detect the bottles, which makes them more expensive. In this paper the design of a Smart Bottle Recycle Machine (SBRM) is presented. It is designed on a Field Programmable Gate Array (FPGA) using an ultrasonic range sensor which is readily available at a low cost. The sensor was used to calculate the number of bottles and distinguish between them. The main objective of this project is to build an SBRM at a cheaper production cost. This project was implemented on an Altera DE2-115 board using Verilog HDL. 
This prototype enables the user to recycle plastic bottles and receive reward points. FPGA was chosen because hardware-based implementation on an FPGA is usually much faster than software-based implementation on a microcontroller. The former is also capable of executing concurrent parallel processes at high speed, whereas the latter can only do a limited amount of parallel execution. So, overall, FPGAs are more efficient than microcontrollers for development of reliable and real-time applications. The developed project is environmentally friendly and cost effective.", "title": "" }, { "docid": "61d506905286fc3297622d1ac39534f0", "text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. 
We further aspire to obtain data with emotional content to perform research on emotion recognition, psychophysiological and usability analysis.", "title": "" }, { "docid": "6c5a5bc775316efc278285d96107ddc6", "text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. 
Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.", "title": "" }, { "docid": "c61c350d6c7bfe7eaae2cd4b2aa452cf", "text": "It is a well-established finding that the central executive is fractionated in at least three separable component processes: Updating, Shifting, and Inhibition of information (Miyake et al., 2000). However, the fractionation of the central executive among the elderly has been less well explored, and Miyake's et al. latent structure has not yet been integrated with other models that propose additional components, such as access to long-term information. Here we administered a battery of classic and newer neuropsychological tests of executive functions to 122 healthy individuals aged between 48 and 91 years. The test scores were subjected to a latent variable analysis (LISREL), and yielded four factors. The factor structure obtained was broadly consistent with Miyake et al.'s three-factor model. However, an additional factor, which was labeled 'efficiency of access to long-term memory', and a mediator factor ('speed of processing') were apparent in our structural equation analysis. Furthermore, the best model that described executive functioning in our sample of healthy elderly adults included a two-factor solution, thus indicating a possible mechanism of dedifferentiation, which involves larger correlations and interdependence of latent variables as a consequence of cognitive ageing. 
These results are discussed in the light of current models of prefrontal cortex functioning.", "title": "" }, { "docid": "2e66317dfe4005c069ceac2d4f9e3877", "text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.", "title": "" }, { "docid": "739aaf487d6c5a7b7fe9d0157d530382", "text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both party agrees to the transaction rules (smart contract) issued by the owner of the data. 
Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.", "title": "" }, { "docid": "a15c94c0ec40cb8633d7174b82b70a16", "text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. 
Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,", "title": "" }, { "docid": "fe25930abd98cba844a6e7a849dae621", "text": "Research in Autonomous Mobile Manipulation critically depends on the availability of adequate experimental platforms. In this paper, we describe an ongoing effort at the University of Massachusetts Amherst to construct a hardware platform with redundant kinematic degrees of freedom, a comprehensive sensor suite, and significant end-effector capabilities for manipulation. In our research, we pursue an end-effector centric view of autonomous mobile manipulation. In support of this view, we are developing a comprehensive software suite to provide a high level of competency in robot control and perception. This software suite is based on a multi-objective, tasklevel motion control framework. We use this control framework to integrate a variety of motion capabilities, including taskbased force or position control of the end-effector, collision-free global motion for the entire mobile manipulator, and mapping and navigation for the mobile base. We also discuss our efforts in developing perception capabilities targeted to problems in autonomous mobile manipulation. 
Preliminary experiments on our UMass Mobile Manipulator (UMan) are presented.", "title": "" }, { "docid": "c4332dfb8e8117c3deac7d689b8e259b", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacher-student methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitative analysis of our method, proving that we have access to a calculator. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" }, { "docid": "021bed3f2c2f09db1bad7d11108ee430", "text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. 
The Context: A Personal Reminiscence. Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. 
These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but", "title": "" } ]
scidocsrr
8fe3e1cc772d1b40d7f05384341d7b98
Independent motion detection with event-driven cameras
[ { "docid": "609cc8dd7323e817ddfc5314070a68bf", "text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.", "title": "" } ]
[ { "docid": "cbae4d5eb347a8136f34fb370d28f46b", "text": "Available online 18 November 2013", "title": "" }, { "docid": "3f98deae1ccf36f9758958ee785bb294", "text": "The Thrombolysis In Myocardial Infarction (TIMI) risk score predicts adverse clinical outcomes in patients with non-ST-elevation acute coronary syndromes (NSTEACS). Whether this score correlates with the coronary anatomy is unknown. We sought to determine whether the TIMI risk score correlates with the angiographic extent and severity of coronary artery disease (CAD) in patients with NSTEACS undergoing cardiac catheterization. We conducted a retrospective review of 688 consecutive medical records of patients who underwent coronary angiography secondary to NSTEACS. Patients were classified into 3 categories according to TIMI risk score: TIMI scores 0 to 2 (n = 284), 3 to 4 (n = 301), and 5 to 7 (n = 103). One-vessel disease was found in patients with TIMI score 3 to 4 as often as in patients with TIMI score 0 to 2 (odds ratio [OR] 1.08, 95% confidence interval [CI] 0.74 to 1.56; p = 0.66). However, 1-vessel disease was found more often in patients with TIMI score 3 to 4 than in patients with TIMI score 5 to 7 (OR 2.16, 95% CI 1.18 to 3.95; p = 0.01), and in patients with TIMI score 0 to 2 than in those with TIMI score 5 to 7 (OR 1.99, 95% CI 1.08 to 3.66; p = 0.02). Two-vessel disease was more likely found in patients with TIMI score 3 to 4 than in those with TIMI scores 0 to 2 (OR 3.96, 95% CI 2.41 to 6.53; p <0.001) and 5 to 7 (OR 2.05, 95% CI 1.12 to 3.75; p = 0.004). Three-vessel or left main disease was more likely found in patients with TIMI score 3 to 4 than in patients with TIMI score 0 to 2 (OR 3.19, 95% CI 2.00 to 5.10; p <0.001), and in patients with TIMI score 5 to 7 than in patients with TIMI score 3 to 4 (OR 6.34, 95% CI 3.88 to 10.36; p <0.001). 
In patients with NSTEACS undergoing cardiac catheterization, the TIMI risk score correlated with the extent and severity of CAD.", "title": "" }, { "docid": "37f14a10e08cbb4d4034d19a7d3bf24e", "text": "Development of Mobile handset applications, new standard for cellular networks have been defined. In this Paper, author intend to propose a Novel mobile Antenna that can cover more of LTE (Long Term Evolution) Bands (4G cellular networks). The proposed antenna uses structure of planar monopole antenna. Bandwidth of antenna is 0.87-0.99 GHz, 1.65-3.14 GHz and has high efficiency unlike the previous structures. The dimension of the antenna is 18mm×21mm and has FR4 substrate by 1.5mm thickness that is very compact antenna respect to the other expressed antenna.", "title": "" }, { "docid": "b947bfe4a4cd38b880ae96ad607479c1", "text": "In order to solve the emergency decision management problem with uncertainty, an Emergency Bayesian decision network (EBDN) model is used in this paper. By computing the probability of each node, the EBDN can solve the uncertainty of different response measures. Using Gray system theory to determine the weight of all kinds of emergency losses. And then use genetic algorithm to search the best combination measure by comparing the value of output loss. For illustration, a typhoon example is utilized to show the feasibility of EBDN model. Empirical results show that the EBDN model can combine expert's knowledge and historic data to predict expected effects under different combinations of response measures, and then choose the best one. 
The proposed EBDN model represents the decision process in diagrammatic form, thereby resolving the uncertainty of emergency events in dynamic emergency decision making.", "title": "" }, { "docid": "c1694750a148296c8b907eb6d1a86074", "text": "A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35°25′1″S; 71°44′1″W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm × 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer while those of G were compared with soil heat flux based on flux plates. Results indicated that the RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m⁻², while those for H were 56 and 46 W m⁻², respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with errors of less than 5% and with values of RMSE and MAE of less than 38 W m⁻². 
Results demonstrated that multispectral and thermal cameras placed on an UAV could provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.", "title": "" }, { "docid": "55a37995369fe4f8ddb446d83ac0cecf", "text": "With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visual-unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired, such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR codes, SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain art style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. In the third stage, we design an error-correction mechanism by balancing two competing terms, visual quality and readability, to ensure the performance robust. Extensive experiments demonstrate that SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.", "title": "" }, { "docid": "770e08dc6a56019d3420a82d9f0e4ea8", "text": "This paper studies how close random graphs are typically to their expectations. We interpret this question through the concentration of the adjacency and Laplacian matrices in the spectral norm. 
We study inhomogeneous Erdős–Rényi random graphs on n vertices, where edges form independently and possibly with different probabilities p_ij. Sparse random graphs whose expected degrees are o(log n) fail to concentrate; the obstruction is caused by vertices with abnormally high and low degrees. We show that concentration can be restored if we regularize the degrees of such vertices, and one can do this in various ways. As an example, let us reweight or remove enough edges to make all degrees bounded above by O(d), where d = max_ij n·p_ij. Then we show that the resulting adjacency matrix A′ concentrates with the optimal rate: ‖A′ − EA‖ = O(√d). Similarly, if we make all degrees bounded below by d by adding weight d/n to all edges, then the resulting Laplacian concentrates with the optimal rate: ‖L(A′) − L(EA′)‖ = O(1/√d). Our approach is based on Grothendieck–Pietsch factorization, using which we construct a new decomposition of random graphs. These results improve and considerably simplify the recent work of E. Levina and the authors. We illustrate the concentration results with an application to the community detection problem in the analysis of networks.", "title": "" }, { "docid": "4c2f9f9681a1d3bc6d9a27a59c2a01d6", "text": "BACKGROUND\nStatin therapy reduces low-density lipoprotein (LDL) cholesterol levels and the risk of cardiovascular events, but whether the addition of ezetimibe, a nonstatin drug that reduces intestinal cholesterol absorption, can reduce the rate of cardiovascular events further is not known.\n\n\nMETHODS\nWe conducted a double-blind, randomized trial involving 18,144 patients who had been hospitalized for an acute coronary syndrome within the preceding 10 days and had LDL cholesterol levels of 50 to 100 mg per deciliter (1.3 to 2.6 mmol per liter) if they were receiving lipid-lowering therapy or 50 to 125 mg per deciliter (1.3 to 3.2 mmol per liter) if they were not receiving lipid-lowering therapy. 
The combination of simvastatin (40 mg) and ezetimibe (10 mg) (simvastatin-ezetimibe) was compared with simvastatin (40 mg) and placebo (simvastatin monotherapy). The primary end point was a composite of cardiovascular death, nonfatal myocardial infarction, unstable angina requiring rehospitalization, coronary revascularization (≥30 days after randomization), or nonfatal stroke. The median follow-up was 6 years.\n\n\nRESULTS\nThe median time-weighted average LDL cholesterol level during the study was 53.7 mg per deciliter (1.4 mmol per liter) in the simvastatin-ezetimibe group, as compared with 69.5 mg per deciliter (1.8 mmol per liter) in the simvastatin-monotherapy group (P<0.001). The Kaplan-Meier event rate for the primary end point at 7 years was 32.7% in the simvastatin-ezetimibe group, as compared with 34.7% in the simvastatin-monotherapy group (absolute risk difference, 2.0 percentage points; hazard ratio, 0.936; 95% confidence interval, 0.89 to 0.99; P=0.016). Rates of prespecified muscle, gallbladder, and hepatic adverse effects and cancer were similar in the two groups.\n\n\nCONCLUSIONS\nWhen added to statin therapy, ezetimibe resulted in incremental lowering of LDL cholesterol levels and improved cardiovascular outcomes. Moreover, lowering LDL cholesterol to levels below previous targets provided additional benefit. (Funded by Merck; IMPROVE-IT ClinicalTrials.gov number, NCT00202878.).", "title": "" }, { "docid": "ecfb05d557ebe524e3821fcf6ce0f985", "text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. 
ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.", "title": "" }, { "docid": "ed0d2151f5f20a233ed8f1051bc2b56c", "text": "This paper describes the development and evaluation of die-attach materials using base metals (Cu and Sn) in three different types of composite. By mixing them into paste or sheet form for die attach, we have confirmed that a Sn-Cu component with an IMC network near its surface plays a major role in providing a robust interconnect, especially for high-temperature applications beyond 200°C after sintering.", "title": "" }, { "docid": "c4e94803ae52dbbf4ac58831ff381467", "text": "Dynamic Adaptive Streaming over HTTP (DASH) is broadly deployed on the Internet for live and on-demand video streaming services. Recently, a new version of HTTP was proposed, named HTTP/2. One of the objectives of HTTP/2 is to improve the end-user perceived latency compared to HTTP/1.1. HTTP/2 introduces the possibility for the server to push resources to the client. This paper focuses on using the HTTP/2 protocol and the server push feature to reduce the start-up delay in a DASH streaming session. In addition, the paper proposes a new approach for video adaptation, which consists in estimating the bandwidth, using WebSocket (WS) over HTTP/2, and in making partial adaptation on the server side. The obtained results show that using the server push feature and WebSocket layered over HTTP/2 allows faster loading times and faster convergence to the nominal state. The proposed solution is studied in the context of a direct client-server HTTP/2 connection. Intermediate caches are not considered in this study.", "title": "" }, { "docid": "df896e48cb4b5a364006b3a8e60a96ac", "text": "This paper describes a monocular vision based parking-slot-markings recognition algorithm, which is used to automate the target position selection of an automatic parking assist system. Peak-pair detection and clustering in Hough space recognize marking lines. 
Specifically, a one-dimensional filter in Hough space is designed to utilize a priori knowledge about the characteristics of marking lines in the bird's-eye-view edge image. A modified distance between a point and a line segment is used to distinguish the guideline from the recognized marking line segments. Once the guideline is successfully recognized, T-shape template matching easily recognizes the dividing marking line segments. Experiments show that the proposed algorithm successfully recognizes parking slots even when adjacent vehicles occlude the parking-slot markings severely.", "title": "" }, { "docid": "bc4a72d96daf03f861b187fa73f57ff6", "text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. 
The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" }, { "docid": "74acfe91e216c8494b7304cff03a8c66", "text": "Diagnostic accuracy of the talar tilt test is not well established in a chronic ankle instability (CAI) population. Our purpose was to determine the diagnostic accuracy of instrumented and manual talar tilt tests in a group with varied ankle injury history compared with a reference standard of self-report questionnaire. Ninety-three individuals participated, with analysis occurring on 88 (39 CAI, 17 ankle sprain copers, and 32 healthy controls). Participants completed the Cumberland Ankle Instability Tool, arthrometer inversion talar tilt tests (LTT), and manual medial talar tilt stress tests (MTT). The ability to determine CAI status using the LTT and MTT compared with a reference standard was performed. The sensitivity (95% confidence intervals) of LTT and MTT was low [LTT = 0.36 (0.23-0.52), MTT = 0.49 (0.34-0.64)]. Specificity was good to excellent (LTT: 0.72-0.94; MTT: 0.78-0.88). Positive likelihood ratio (+ LR) values for LTT were 1.26-6.10 and for MTT were 2.23-4.14. Negative LR for LTT were 0.68-0.89 and for MTT were 0.58-0.66. Diagnostic odds ratios ranged from 1.43 to 8.96. Both clinical and arthrometer laxity testing appear to have poor overall diagnostic value for evaluating CAI as stand-alone measures. 
Laxity testing to assess CAI may only be useful to rule in the condition.", "title": "" }, { "docid": "626408161aa06de1cb50253094d4d8f8", "text": "In this communication, a corporate stacked microstrip and substrate integrated waveguide (SIW) feeding structure is reported to be used to broaden the impedance bandwidth of a <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> patch array antenna. The proposed array antenna is based on a multilayer printed circuit board structure containing two dielectric substrates and four copper cladding layers. The radiating elements, which consist of slim rectangular patches with surrounding U-shaped parasitic patches, are located on the top layer. Every four radiation elements are grouped together as a <inline-formula> <tex-math notation=\"LaTeX\">$2 \\times 2$ </tex-math></inline-formula> subarray and fed by a microstrip power divider on the next copper layer through metalized blind vias. Four such subarrays are corporate-fed by an SIW feeding network underneath. The design process and analysis of the array antenna are discussed. A prototype of the proposed array antenna is fabricated and measured, showing a good agreement between the simulation and measurement, thus validating the correctness of the design. The measured results indicate that the proposed array antenna exhibits a wide <inline-formula> <tex-math notation=\"LaTeX\">$\\vert \\text {S}_{11}\\vert < -10$ </tex-math></inline-formula> dB bandwidth of 17.7%, i.e., 25.3–30.2 GHz, a peak gain of 16.4 dBi, a high radiation efficiency above 80%, and a good orthogonal polarization discrimination of higher than 30 dB. In addition, the use of low-profile substrate in the SIW feeding network makes this array antenna easier to be integrated directly with millimeter-wave front-end integrated circuits. 
The demonstrated array antenna can be a good candidate for various <italic>Ka</italic>-band wireless applications, such as 5G, satellite communications and so on.", "title": "" }, { "docid": "e3a766bad255bc3f4ad095cece45c637", "text": "We introduce a new task called Multimodal Named Entity Recognition (MNER) for noisy user-generated data such as tweets or Snapchat captions, which comprise short text with accompanying images. These social media posts often come in inconsistent or incomplete syntax and lexical notations with very limited surrounding textual contexts, bringing significant challenges for NER. To this end, we create a new dataset for MNER called SnapCaptions (Snapchat image-caption pairs submitted to public and crowd-sourced stories with fully annotated named entities). We then build upon the state-of-the-art Bi-LSTM word/character based NER models with 1) a deep image network which incorporates relevant visual context to augment textual information, and 2) a generic modality-attention module which learns to attenuate irrelevant modalities while amplifying the most informative ones to extract contexts from, adaptive to each sample and token. The proposed MNER model with modality attention significantly outperforms the state-of-the-art text-only NER models by successfully leveraging provided visual contexts, opening up potential applications of MNER on myriads of social media platforms.", "title": "" }, { "docid": "ee81c38d65c6ff2988c5519c77ffb13e", "text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i", "title": "" }, { "docid": "13ac4474f01136b2603f2b7ee9eedf19", "text": "Teamwork is best achieved when members of the team understand one another. Human-robot collaboration poses a particular challenge to this goal due to the differences between individual team members, both mentally/computationally and physically. One way in which this challenge can be addressed is by developing explicit models of human teammates. 
Here, we discuss, compare and contrast the many techniques available for modeling human cognition and behavior, and evaluate their benefits and drawbacks in the context of human-robot collaboration.", "title": "" }, { "docid": "832a208d5f0e0c9d965bf6037d002bb3", "text": "Littering constitutes a major societal problem, and any simple intervention that reduces its prevalence would be widely beneficial. In previous research, we have found that displaying images of watching eyes in the environment makes people less likely to litter. Here, we investigate whether the watching eyes images can be transferred onto the potential items of litter themselves. In two field experiments on a university campus, we created an opportunity to litter by attaching leaflets that either did or did not feature an image of watching eyes to parked bicycles. In both experiments, the watching eyes leaflets were substantially less likely to be littered than control leaflets (odds ratios 0.22-0.32). We also found that people were less likely to litter when there were other people in the immediate vicinity than when there were not (odds ratios 0.04-0.25) and, in one experiment but not the other, that eye leaflets only reduced littering when there were no other people in the immediate vicinity. We suggest that designing cues of observation into packaging could be a simple but fruitful strategy for reducing littering.", "title": "" }, { "docid": "d1f24e3461ae9bcf9bece544f1ed3bd2", "text": "The goal of this study was to examine the mediating role of negative emotions in the link between academic stress and Internet addiction among Korean adolescents. We attempted to extend the general strain theory to Internet addiction by exploring psychological pathways from academic stress to Internet addiction using a national and longitudinal panel study. A total of 512 adolescents completed self-reported scales for academic stress, negative emotions, and Internet addiction. 
We found that academic stress was positively associated with negative emotions and Internet addiction, and negative emotions were positively associated with Internet addiction. Further, the results of structural equation modeling revealed that adolescents’ academic stress had indirectly influenced Internet addiction through negative emotions. The results of this study suggest that adolescents who experience academic stress might be at risk for Internet addiction, particularly when accompanied with negative emotions. These findings provided significant implications for counselors and policymakers to prevent adolescents’ Internet addiction, and extended the general strain theory to Internet addiction which is typically applicable to deviant behavior. 2015 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
d75cf922e9d16103f54658fa33352c86
Distributed Data Streams
[ { "docid": "872f556cb441d9c8976e2bf03ebd62ee", "text": "Monitoring is an issue of primary concern in current and next generation networked systems. For example, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value. In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. 
We find that both approaches yield significant savings over the naive approach of centralized processing.", "title": "" }, { "docid": "7bdc7740124adab60c726710a003eb87", "text": "We have developed Gigascope, a stream database for network applications including traffic analysis, intrusion detection, router configuration analysis, network research, network monitoring, and performance monitoring and debugging. Gigascope is undergoing installation at many sites within the AT&T network, including at OC48 routers, for detailed monitoring. In this paper we describe our motivation for and constraints in developing Gigascope, the Gigascope architecture and query language, and performance issues. We conclude with a discussion of stream database research problems we have found in our application.", "title": "" } ]
[ { "docid": "5f5960cf7621f95687cbbac48dfdb0c5", "text": "We present the first controller that allows our small hexapod robot, RHex, to descend a wide variety of regular sized, “real-world” stairs. After selecting one of two sets of trajectories, depending on the slope of the stairs, our open-loop, clock-driven controllers require no further operator input nor task level feedback. Energetics for stair descent is captured via specific resistance values and compared to stair ascent and other behaviors. Even though the algorithms developed and validated in this paper were developed for a particular robot, the basic motion strategies, and the phase relationships between the contralateral leg pairs are likely applicable to other hexapod robots of similar size as well.", "title": "" }, { "docid": "476c1e503065f3d1638f6f2302dc6bbb", "text": "The increasing popularity and ubiquity of various large graph datasets has caused renewed interest for graph partitioning. Existing graph partitioners either scale poorly against large graphs or disregard the impact of the underlying hardware topology. A few solutions have shown that the nonuniform network communication costs may affect the performance greatly. However, none of them considers the impact of resource contention on the memory subsystems (e.g., LLC and Memory Controller) of modern multicore clusters. They all neglect the fact that the bandwidth of modern high-speed networks (e.g., Infiniband) has become comparable to that of the memory subsystems. In this paper, we provide an in-depth analysis, both theoretically and experimentally, on the contention issue for distributed workloads. We found that the slowdown caused by the contention can be as high as 11x. We then design an architecture-aware graph partitioner, Argo, to allow the full use of all cores of multicore machines without suffering from either the contention or the communication heterogeneity issue. 
Our experimental study showed (1) the effectiveness of Argo, achieving up to 12x speedups on three classic workloads: Breadth First Search, Single Source Shortest Path, and PageRank; and (2) the scalability of Argo in terms of both graph size and the number of partitions on two billion-edge real-world graphs.", "title": "" }, { "docid": "0186c053103d06a8ddd054c3c05c021b", "text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. 
In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.", "title": "" }, { "docid": "6087e066b04b9c3ac874f3c58979f89a", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "8a9603a10e5e02f6edfbd965ee11bbb9", "text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. 
This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "029cca0b7e62f9b52e3d35422c11cea4", "text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. 
It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.", "title": "" }, { "docid": "6fb48ddc2f14cdb9371aad67e9c8abe0", "text": "Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Previous approaches are not high-throughput, are not generalizable or scalable, or lack sufficient data to be effective. We describe single mechanistic reactions as concerted electron movements from an electron orbital source to an electron orbital sink. We use an existing rule-based expert system to derive a dataset consisting of 2,989 productive mechanistic steps and 6.14 million non-productive mechanistic steps. We then pose identifying productive mechanistic steps as a ranking problem: rank potential orbital interactions such that the top ranked interactions yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.0% of non-productive reactions with less than a 0.1% false negative rate. Then, we train an ensemble of ranking models on pairs of interacting orbitals to learn a relative productivity function over single mechanistic reactions in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanisms at the top 89.1% of the time, rising to 99.9% of the time when top ranked lists with at most four non-productive reactions are considered. The final system allows multi-step reaction prediction. Furthermore, it is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert system does not handle.", "title": "" }, { "docid": "5ddcfb5404ceaffd6957fc53b4b2c0d8", "text": "A router's main function is to allow communication between different networks as quickly as possible and in an efficient manner. 
The communication can be between LANs or between a LAN and a WAN. A firewall's function is to restrict unwanted traffic. In big networks, routing and firewall tasks are performed by different network devices. But in small networks, we want both functions on the same device, i.e., a single device performing both routing and firewalling. We call such devices routing firewalls. In traditional networks, such devices are already available. But next-generation networks will be powered by Software Defined Networking (SDN). For wide adoption of SDN, we need northbound SDN applications such as routers, load balancers, firewalls, proxy servers, deep packet inspection devices, and routing firewalls running on OpenFlow-based physical and virtual switches. But SDN is still at an early stage, so few such applications are available. There already exist simple L3 learning applications that provide elementary router functions, as well as simple stateful firewalls providing basic access control. In this paper, we implement an SDN routing firewall application that performs both routing and firewall functions.", "title": "" }, { "docid": "6b1adc1da6c75f6cc0cb17820add8ef1", "text": "Many different classification tasks need to manage structured data, which are usually modeled as graphs. Moreover, these graphs can be dynamic, meaning that the vertices/edges of each graph may change over time. Our goal is to jointly exploit structured data and temporal information through the use of a neural network model. To the best of our knowledge, this task has not been addressed using these kinds of architectures. For this reason, we propose two novel approaches, which combine Long Short-Term Memory networks and Graph Convolutional Networks to learn long short-term dependencies together with graph structure. 
The quality of our methods is confirmed by the promising results achieved.", "title": "" }, { "docid": "e0f0ccb0e1c2f006c5932f6b373fb081", "text": "This paper proposes a methodology to be used in the segmentation of infrared thermography images for the detection of bearing faults in induction motors. The proposed methodology can be a helpful tool for preventive and predictive maintenance of the induction motor. This methodology is based on manual threshold image processing to obtain a segmentation of an infrared thermal image, which is used for the detection of critical points known as hot spots on the system under test. From these hot spots, the parameters of interest that describe the thermal behavior of the induction motor were obtained. With the segmented image, it is possible to compare and analyze the thermal conditions of the system.", "title": "" }, { "docid": "e95541d0401a196b03b94dd51dd63a4b", "text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). 
Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters (Junqua & Haton, 1996) – has been the object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances will make spoken language as convenient and accessible as online text once recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6% (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% (Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection (Schwarz, 2008), language recognition (Matejka, 2009; Schwarz, 2008), speaker identification (Furui, 2005) and applications for music identification and translation (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and
Artists, critical engineers, and programmers have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMDs) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' (Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new art-event. This paper develops a new formulation, the 'software assemblage,' to explore some of the unique mixed reality situations that AR(t) has set in motion.", "title": "" }, { "docid": "b71197073ea33bb8c61973e8cd7d2775", "text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.", "title": "" }, { "docid": "2a1d77e0c5fe71c3c5eab995828ef113", "text": "Local modular control (LMC) is an approach to the supervisory control theory (SCT) of discrete-event systems that exploits the modularity of plant and specifications.
Recently, distinguishers and approximations have been associated with SCT to simplify modeling and reduce synthesis effort. This paper shows how advantages from LMC, distinguishers, and approximations can be combined. Sufficient conditions are presented to guarantee that local supervisors computed by our approach lead to the same global closed-loop behavior as the solution obtained with the original LMC, in which the modeling is entirely handled without distinguishers. A further contribution presents a modular way to design distinguishers and a straightforward way to construct approximations to be used in local synthesis. An example of manufacturing system illustrates our approach. Note to Practitioners—Distinguishers and approximations are alternatives to simplify modeling and reduce synthesis cost in SCT, grounded on the idea of event-refinements. However, this approach may entangle the modular structure of a plant, so that LMC does not keep the same efficiency. This paper shows how distinguishers and approximations can be locally combined such that synthesis cost is reduced and LMC advantages are preserved.", "title": "" }, { "docid": "9b0114697dc6c260610d0badc1d7a2a4", "text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. 
Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.", "title": "" }, { "docid": "7bfbcf62f9ff94e80913c73e069ace26", "text": "This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. 
Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.", "title": "" }, { "docid": "90d9360a3e769311a8d7611d8c8845d9", "text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.", "title": "" }, { "docid": "6ddb475ef1529ab496ab9f40dc51cb99", "text": "While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. 
Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.", "title": "" }, { "docid": "9d5c258e4a2d315d3e462ab333f3a6df", "text": "The modern smart phone and car concepts provide a fertile ground for new location-aware applications, ranging from traffic management to social services. While the functionality is partly implemented at the mobile terminal, there is a rising need for efficient backend processing of high-volume, high update rate location streams. It is in this environment that geofencing, the detection of objects traversing virtual fences, is becoming a universal primitive required by an ever-growing number of applications. To satisfy the functionality and performance requirements of large-scale geofencing applications, we present in this work a backend system for indexing massive quantities of mobile objects and geofences. Our system runs on a cluster of servers, achieving a throughput of location updates that scales linearly with number of machines. The key ingredients to achieve a high performance are a specialized spatial index, a dynamic caching mechanism, and a load-sharing principle that reduces communication overhead to a minimum and enables a shared-nothing architecture. 
The throughput of the spatial index as well as the performance of the overall system are demonstrated by experiments using simulations of large-scale geofencing applications.", "title": "" } ]
scidocsrr
f616c0706ac0074e8238c7f33fa8dcef
Trajectory Tracking Control for a 3-DOF Parallel Manipulator Using Fractional-Order $\hbox{PI}^{\lambda}\hbox{D}^{\mu}$ Control
[ { "docid": "55b3fe6f2b93fd958d0857b485927bc9", "text": "In this paper, in order to satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy during high-speed, high-acceleration tracking motions of a 3-degree-of-freedom (3-DOF) planar parallel manipulator, we propose a new control approach, termed convex synchronized (C-S) control. This control strategy is based on the so-called convex combination method, in which the synchronized control method is adopted. Through the adoption of a set of n synchronized controllers, each of which is tuned to satisfy at least one of a set of n closed-loop performance specifications, the resultant set of n closed-loop transfer functions are combined in a convex manner, from which a C-S controller is solved algebraically. Significantly, the resultant C-S controller simultaneously satisfies all n closed-loop performance specifications. Since each synchronized controller is only required to satisfy at least one of the n closed-loop performance specifications, the convex combination method is more efficient than trial-and-error methods, where the gains of a single controller are tuned to satisfy all n closed-loop performance specifications simultaneously. Furthermore, during the design of each synchronized controller, a feedback signal, termed the synchronization error, is employed. Different from the traditional tracking errors, this synchronization error represents the degree of coordination of the active joints in the parallel manipulator based on the manipulator kinematics. As a result, the trajectory tracking accuracy of each active joint and that of the manipulator end-effector is improved. Thus, possessing both the advantages of the convex combination method and synchronized control, the proposed C-S control method can satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy. 
In addition, unavoidable dynamic modeling errors are addressed through the introduction of a robust performance specification, which ensures that all performance specifications are satisfied despite allowable variations in dynamic parameters, or modeling errors. Experiments conducted on a 3-DOF P-R-R-type planar parallel manipulator demonstrate the aforementioned claims.", "title": "" } ]
[ { "docid": "eea9332a263b7e703a60c781766620e5", "text": "The use of topic models to analyze domain-specific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expert-provided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.", "title": "" }, { "docid": "9a87f11fed489f58b0cdd15b329e5245", "text": "BACKGROUND\nBracing is an effective strategy for scoliosis treatment, but there is no consensus on the best type of brace, nor on the way in which it should act on the spine to achieve good correction. The aim of this paper is to present the family of SPoRT (Symmetric, Patient-oriented, Rigid, Three-dimensional, active) braces: Sforzesco (the first introduced), Sibilla and Lapadula.\n\n\nMETHODS\nThe Sforzesco brace was developed following specific principles of correction. Due to its overall symmetry, the brace provides space over pathological depressions and pushes over elevations. Correction is reached through construction of the envelope, pushes, escapes, stops, and drivers. The real novelty is the drivers, introduced for the first time with the Sforzesco brace; they allow to achieve the main action of the brace: a three-dimensional elongation pushing the spine in a down-up direction. Brace prescription is made plane by plane: frontal (on the "slopes", another novelty of this concept, i.e. the laterally flexed sections of the spine), horizontal, and sagittal. The brace is built modelling the trunk shape obtained either by a plaster cast mould or by CAD-CAM construction.
Brace checking is essential, since SPoRT braces are adjustable and customisable according to each individual curve pattern. Treatment time and duration are individually tailored (18-23 hours per day until Risser 3, then gradual reduction). SEAS (Scientific Exercises Approach to Scoliosis) exercises are a key factor to achieve success.\n\n\nRESULTS\nThe Sforzesco brace has been shown to be more effective than the Lyon brace (matched case/control), equally effective as the Risser plaster cast (prospective cohort with retrospective controls), more effective than the Risser cast + Lyon brace in treating curves over 45 degrees Cobb (prospective cohort), and is able to improve aesthetic appearance (prospective cohort).\n\n\nCONCLUSIONS\nThe SPoRT concept of bracing (three-dimensional elongation pushing in a down-up direction) is different from the other corrective systems: 3-point, traction, postural, and movement-based. The Sforzesco brace, being comparable to casting, may be the best brace for the worst cases.", "title": "" }, { "docid": "a96d6649a2274a919fbeb5b2221d69c6", "text": "In this paper, a novel center frequency and bandwidth tunable, cross-coupled waveguide resonator filter is presented. The coupling between adjacent resonators can be adjusted using non-resonating coupling resonators. The negative sign for the cross coupling, which is required to generate transmission zeros, is enforced by choosing an appropriate resonant frequency for the cross-coupling resonator. The coupling iris design itself is identical regardless of the sign of the coupling. The design equations for the novel coupling elements are given in this paper. A four-pole filter breadboard with two transmission zeros (elliptic filter function) has been built and measured at various bandwidth and center frequency settings.
It operates at Ka-band frequencies and can be tuned to bandwidths from 36 to 72 MHz in the frequency range 19.7-20.2 GHz.", "title": "" }, { "docid": "c1ba049befffa94e358555056df15cc2", "text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.", "title": "" }, { "docid": "7ce147a433a376dd1cc0f7f09576e1bd", "text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).", "title": "" }, { "docid": "2b1048b3bdb52c006437b18d7b458871", "text": "A road interpretation module is presented, which is part of a real-time vehicle guidance system for autonomous driving.
Based on bifocal computer vision, the complete system is able to drive a vehicle on marked or unmarked roads, to detect obstacles, and to react appropriately. The hardware is a network of 23 transputers, organized in modular clusters. Parallel modules performing image analysis, feature extraction, object modelling, sensor data integration and vehicle control are organized in hierarchical levels. The road interpretation module is based on the principle of recursive state estimation by Kalman filter techniques. Internal 4-D models of the road, vehicle position, and orientation are updated using data produced by the image-processing module. The system has been implemented on two vehicles (VITA and VaMoRs) and demonstrated in the framework of PROMETHEUS, where the abilities of autonomous driving through narrow curves and of lane changing were demonstrated. Meanwhile, the system has been tested on public roads in real traffic situations, including travel on a German Autobahn autonomously at speeds up to 85 km/h. Belcastro, C.M., Fischl, R., and M. Kam. “Fusion Techniques Using Distributed Kalman Filtering for Detecting Changes in Systems.” Proceedings of the 1991 American Control Conference. 26-28 June 1991: Boston, MA. American Autom. Control Council, 1991. Vol. 3: (2296-2298).", "title": "" }, { "docid": "181530396a384e0e8c8ed00bcd195e81", "text": "Numerous problems encountered in real life cannot be actually formulated as a single objective problem; hence the requirement of Multi-Objective Optimization (MOO) had arisen several years ago. Due to the complexities in such type of problems powerful heuristic techniques were needed, which has been strongly satisfied by Swarm Intelligence (SI) techniques. Particle Swarm Optimization (PSO) has been established in 1995 and became a very mature and most popular domain in SI.
Multi-Objective PSO (MOPSO), established in 1999, has become an emerging field for solving MOOs, with a large body of literature, software, variants, codes and applications. This paper reviews all the applications of MOPSO in miscellaneous areas, followed by the study on MOPSO variants in our next publication. An introduction to the key concepts in MOO is followed by the main body of the review, containing a survey of existing work, organized by application area along with their multiple objectives, variants and further categorized variants.", "title": "" }, { "docid": "479b124662755d8b07f2f5f9baabef9a", "text": "The ARINC 653 specification defines the functionality that an operating system (OS) must guarantee to enforce robust spatial and temporal partitioning, as well as an avionics application programming interface for the system. The standard application interface - the ARINC 653 application executive (APEX) - is defined as a set of software services a compliant OS must provide to avionics application developers. The ARINC 653 specification defines the interfaces and the behavior of the APEX but leaves implementation details to OS vendors. This paper describes an OS-independent design approach for a portable APEX interface. POSIX, as a programming interface available on a wide range of modern OSs, will be used to implement the APEX layer. This way, the standardization of the APEX is taken a step further: not only is the definition of services standardized, but also its interface to the underlying OS.
Therefore, the APEX operation does not depend on a particular OS but relies on a well defined set of standardized components.", "title": "" }, { "docid": "4a4a868d64a653fac864b5a7a531f404", "text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.", "title": "" }, { "docid": "12b8dac3e97181eb8ca9c0406f2fa456", "text": "INTRODUCTION\nThis paper discusses some of the issues and challenges of implementing appropriate and coordinated District Health Management Information System (DHMIS) in environments dependent on external support especially when insufficient attention has been given to the sustainability of systems. It also discusses fundamental issues which affect the usability of DHMIS to support District Health System (DHS), including meeting user needs and user education in the use of information for management; and the need for integration of data from all health-providing and related organizations in the district.\n\n\nMETHODS\nThis descriptive cross-sectional study was carried out in three DHSs in Kenya. 
Data was collected through the use of questionnaires, focus group discussions and review of relevant literature, reports and operational manuals of the studied DHMISs.\n\n\nRESULTS\nKey personnel at the DHS level were not involved in the development and implementation of the established systems. The DHMISs were fragmented to the extent that their information products were bypassing the very levels they were created to serve. None of the DHMISs was computerized. Key resources for DHMIS operation were inadequate. The adequacy of personnel was 47%, working space 40%, storage space 34%, stationery 20%; 73% of DHMIS staff were not trained, and management support was 13%. Information produced was 30% accurate, 19% complete, 26% timely, 72% relevant; the level of confidentiality and use of information at the point of collection stood at 32% and 22% respectively, and information security at 48%. Basic DHMIS equipment for information processing was not available. This inhibited effective and efficient provision of information services.\n\n\nCONCLUSIONS\nAn effective DHMIS is essential for DHS planning, implementation, monitoring and evaluation activities. Without accurate, timely, relevant and complete information, the existing information systems are not capable of facilitating the DHS managers in their day-to-day operational management. The existing DHMISs were found not to be supportive of the DHS managers' strategic and operational management functions. Consequently, DHMISs were found to be plagued by numerous design, operational, resource and managerial problems. There is an urgent need to explore the possibilities of computerizing the existing manual systems to take advantage of the potential uses of microcomputers for DHMIS operations within the DHS.
Information system designers must also address issues of cooperative partnership in information activities, systems compatibility and sustainability.", "title": "" }, { "docid": "5a583fe6fae9f0624bcde5043c56c566", "text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with a more compact size and almost the same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.", "title": "" }, { "docid": "a321a7709188c741b34824c8b9084d47", "text": "We offer a fluctuation smoothing computational approach for unsupervised automatic short answer grading (ASAG) techniques in the educational ecosystem. A major drawback of the existing techniques is the significant effect that variations in model answers could have on their performances. The proposed fluctuation smoothing approach, based on classical sequential pattern mining, exploits lexical overlap in students’ answers to any typical question. We empirically demonstrate using multiple datasets that the proposed approach improves the overall performance and significantly reduces (up to 63%) variation in performance (standard deviation) of unsupervised ASAG techniques. We bring in additional benchmarks such as (a) paraphrasing of model answers and (b) using answers by k top performing students as model answers, to amplify the benefits of the proposed approach.", "title": "" }, { "docid": "93464384fa3c20cec1bfae7b4dc7a216", "text": "Among the various solutions for the series association of high power IGBTs, the active clamping circuit ensures both protection and voltage balancing, with good reliability and compactness. Therefore, this structure has been chosen to be integrated close to the IGBTs.
The design of this circuit requires resolving a compromise between good balancing and limited additional losses. The aim of this paper is to optimise this circuit in order to reduce the losses, in the IGBTs as well as in the active clamping circuit. This design has been validated in a 3 kV 400 A test bench, using three 1.7 kV components in series.", "title": "" }, { "docid": "7c9d35fb9cec2affbe451aed78541cef", "text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Until very recently, almost all individuals experienced this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility of medical imaging, clinical applications now have a better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as the dataset and a deep neural network as the technique. This technique is based on a stacked sparse auto-encoder and a softmax classifier, which are used to train a deep neural network. The novelty here is to apply a deep neural network to the diagnosis of dental caries. This approach was tested on a real dataset and demonstrated good detection performance. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.", "title": "" }, { "docid": "dc8180cdc6344f1dc5bfa4dbf048912c", "text": "Image analysis is a key area in the computer vision domain that has many applications. Genetic Programming (GP) has been successfully applied to this area extensively, with promising results.
High-level features extracted from methods such as Speeded Up Robust Features (SURF) and Histogram of Oriented Gradients (HoG) are commonly used for object detection with machine learning techniques. However, GP techniques are not often used with these methods, despite being applied extensively to image analysis problems. Combining the training process of GP with the powerful features extracted by SURF or HoG has the potential to improve the performance by generating high-level, domain-tailored features. This paper proposes a new GP method that automatically detects different regions of an image, extracts HoG features from those regions, and simultaneously evolves a classifier for image classification. By extending an existing GP region selection approach to incorporate the HoG algorithm, we present a novel way of using high-level features with GP for image classification. The ability of GP to explore a large search space in an efficient manner allows all stages of the new method to be optimised simultaneously, unlike in existing approaches. The new approach is applied across a range of datasets, with promising results when compared to a variety of well-known machine learning techniques. Some high-performing GP individuals are analysed to give insight into how GP can effectively be used with high-level features for image classification.", "title": "" }, { "docid": "78786193b4f7521b05f43997218f6778", "text": "The design and fabrication of an Ultra broadband square quad-ridge polarizer is discussed here. The principal advantages of this topology rely on both the instantaneous bandwidth and the axial ratio improvement. Experimental measurements exhibit very good agreement with the predicted results given by Mode Matching techniques. The structure provides an extremely flat axial ratio (AR< 0.4dB) and good return losses >25dB at both square ports over the extended Ku band (= 60%).
Moreover, yield analysis and scaling properties demonstrate the robustness of this design against fabrication tolerances.", "title": "" }, { "docid": "081faf749f5e996c70f91a77ecae2a88", "text": "Hyponatremia associated with diuretic use can be clinically difficult to differentiate from the syndrome of inappropriate antidiuretic hormone secretion (SIADH). We report a case of a 28-year-old man with HIV (human immunodeficiency virus) and Pneumocystis pneumonia who developed hyponatremia while receiving trimethoprim-sulfamethoxazole (TMP/SMX). Serum sodium level on admission was 135 mEq/L (with a history of hyponatremia) and decreased to 117 mEq/L by day 7 of TMP/SMX treatment. In the setting of suspected euvolemia and Pneumocystis pneumonia, he was treated initially for SIADH with fluid restriction and tolvaptan without improvement in serum sodium level. A diagnosis of hyponatremia secondary to the diuretic effect of TMP subsequently was confirmed, with clinical hypovolemia and high renin, aldosterone, and urinary sodium levels. Subsequent therapy with sodium chloride stabilized serum sodium levels in the 126- to 129-mEq/L range. After discontinuation of TMP/SMX treatment, serum sodium, renin, and aldosterone levels normalized. TMP/SMX-related hyponatremia likely is underdiagnosed and often mistaken for SIADH. It should be considered for patients on high-dose TMP/SMX treatment and can be differentiated from SIADH by clinical hypovolemia (confirmed by high renin and aldosterone levels). TMP-associated hyponatremia can be treated with sodium supplementation to offset ongoing urinary losses if the TMP/SMX therapy cannot be discontinued. 
In this Acid-Base and Electrolyte Teaching Case, a less common cause of hyponatremia is presented, and a stepwise approach to the diagnosis is illustrated.", "title": "" }, { "docid": "6384c31adaf8b28ca7a6dd97d3eb571a", "text": ".....................................................................................................3 Introduction...................................................................................................4 Chapter 1. History of Origami............................................................................. 5 Chapter 2. Evolution of Origami tessellations in 20-th century architecture........................7 Chapter 3. Kinetic system and Origami...................................................................9 3.1. Kinetic system................................................................................. 9 3.2. Geometric Origami............................................................................ 9 Chapter 4. Folding patterns................................................................................ 10 4.1. Yoshimura pattern (diamond pattern)........................................................ 11 4.2. Diagonal pattern..............................................................................11 4.3. Miura Ori pattern (herringbone pattern)...................................................11 Chapter 5. The origami house and impact on the furniture design.................................... 13 Conclusion.................................................................................................... 16 References...................................................................................................17 Annex 1....................................................................................................... 18 Annex 2...................................................................................................... 
19", "title": "" }, { "docid": "03dc5f33c4735680902c3cd190a07962", "text": "Natural systems from snowflakes to mollusc shells show a great diversity of complex patterns. The origins of such complexity can be investigated through mathematical models termed ‘cellular automata’. Cellular automata consist of many identical components, each simple., but together capable of complex behaviour. They are analysed both as discrete dynamical systems, and as information-processing systems. Here some of their universal features are discussed, and some general principles are suggested.", "title": "" }, { "docid": "293e2cd2647740bb65849fed003eb4ac", "text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.", "title": "" } ]
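The record above closes with a passage applying LBP-TOP (Local Binary Patterns on Three Orthogonal Planes) to action recognition. As a hedged illustration of only the underlying single-plane operator — not the three-plane LBP-TOP descriptor itself, and with an invented function name — the basic 8-neighbour local binary pattern code for one pixel can be sketched as:

```python
def lbp8(img, y, x):
    """8-neighbour local binary pattern code for pixel (y, x).

    Each neighbour contributes one bit: 1 if its intensity is >= the
    centre pixel, 0 otherwise, read clockwise from the top-left.
    `img` is a list of rows of grayscale intensities.
    """
    c = img[y][x]
    # Clockwise neighbour offsets starting at the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dy, dx in offs:
        code = (code << 1) | (1 if img[y + dy][x + dx] >= c else 0)
    return code

# A bright top row sets the three leading bits only.
img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
print(lbp8(img, 1, 1))  # → 224 (0b11100000)
```

LBP-TOP then histograms such codes separately on the XY, XT, and YT planes of a video volume and concatenates the histograms; the per-pixel operator itself is unchanged.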
scidocsrr
9bb02d8f26d1a73a2e11ef6a8c6fe2b9
A CPPS Architecture approach for Industry 4.0
[ { "docid": "13c0f622205a67e2d026e9eb097df0e3", "text": "This paper presents an approach to how existing production systems that are not Industry 4.0-ready can be expanded to participate in an Industry 4.0 factory. Within this paper, a concept is presented how production systems can be discovered and included into an Industry 4.0 (I4.0) environment, even though they did not have I4.0interfaces when they have been manufactured. The concept is based on a communication gateway and an information server. Besides the concept itself, this paper presents a validation that demonstrates applicability of the developed concept.", "title": "" } ]
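The positive passage above describes retrofitting non-Industry-4.0 production systems via a communication gateway and an information server. As a purely illustrative, hedged sketch of that discovery-and-registration idea — all class, method, and asset names here are invented, not taken from the paper — the pattern might be modelled as:

```python
class InformationServer:
    """Toy registry standing in for the paper's information server."""

    def __init__(self):
        self.assets = {}

    def register(self, asset_id, description):
        # Discovery endpoint: store the self-description of an asset.
        self.assets[asset_id] = description


class LegacyMachineGateway:
    """Wraps a non-I4.0 machine behind a uniform I4.0-style interface."""

    def __init__(self, machine_id, proprietary_read):
        self.machine_id = machine_id
        self._read = proprietary_read  # vendor-specific access function

    def status(self):
        # Translate the proprietary signal into a normalised status value.
        return "running" if self._read() > 0 else "idle"

    def announce(self, server):
        # Discovery step: publish a self-description to the server.
        server.register(self.machine_id,
                        {"protocol": "legacy", "status": self.status()})


server = InformationServer()
gw = LegacyMachineGateway("press-01", proprietary_read=lambda: 42)
gw.announce(server)
print(server.assets["press-01"]["status"])  # → running
```

The point of the gateway is that the server only ever sees the normalised description, so Industry-4.0 participants need no knowledge of the machine's proprietary interface.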
[ { "docid": "45a92ab90fabd875a50229921e99dfac", "text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.", "title": "" }, { "docid": "59db435e906db2c198afdc5cc7c7de2c", "text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. 
Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.", "title": "" }, { "docid": "1b29aa20e82dba0992634d3a178ad0c5", "text": "This paper presents the approach developed for the partial MASPS level document DO-344 “Operational and Functional Requirements and Safety Objectives” for the UAS standards. Previous RTCA1 work led to the production of an Operational Services Environment Description document, from which operational requirements were extracted and refined. Following the principles described in the Department of Defense Architecture Framework, the overall UAS architecture and major interfaces were defined. Interacting elements included the unmanned aircraft (airborne component), the ground control station (ground component), the Air Traffic Control (ATC), the Air Traffic Service besides ATC, other traffic in the NAS, and the UAS ground support. Furthering the level of details, a functional decomposition was produced prior to the allocation onto the UAS architecture. These functions cover domains including communication, control, navigation, surveillance, and health monitoring. The communication function addressed all elements in the UAS connected with external interfaces: the airborne component, the ground component, the ATC, the other traffic and the ground support. The control function addressed the interface between the ground control station and the unmanned aircraft for the purpose of flying in the NAS. The navigation function covered the capability to determine and fly a trajectory using conventional and satellite based navigation means. The surveillance function addressed the capability to detect and avoid collisions with hazards, including other traffic, terrain and obstacles, and weather. 
Finally, the health monitoring function addressed the capability to oversee UAS systems, probe for their status and feedback issues related to degradation or loss of performance. An additional function denoted `manage' was added to the functional decomposition to complement the health monitoring coverage and included manual modes for the operation of the UAS.", "title": "" }, { "docid": "f8c6906f4d0deb812e42aaaff457a6d9", "text": "By the early 1900s, Euro-Americans had extirpated gray wolves (Canis lupus) from most of the contiguous United States. Yellowstone National Park was not immune to wolf persecution and by the mid-1920s they were gone. After seven decades of absence in the park, gray wolves were reintroduced in 1995–1996, again completing the large predator guild (Smith et al. 2003). Yellowstone’s ‘‘experiment in time’’ thus provides a rare opportunity for studying potential cascading effects associated with the extirpation and subsequent reintroduction of an apex predator. Wolves represent a particularly important predator of large mammalian prey in northern hemisphere ecosystems by virtue of their group hunting and year-round activity (Peterson et al. 2003) and can have broad top-down effects on the structure and functioning of these systems (Miller et al. 2001, Soulé et al. 2003, Ray et al. 2005). If a tri-trophic cascade involving wolves–elk (Cervus elaphus)–plants is again underway in northern Yellowstone, theory would suggest two primary mechanisms: (1) density mediation through prey mortality and (2) trait mediation involving changes in prey vigilance, habitat use, and other behaviors (Brown et al. 1999, Berger 2010). Both predator-caused reductions in prey numbers and fear responses they elicit in prey can lead to cascading trophic-level effects across a wide range of biomes (Beschta and Ripple 2009, Laundré et al. 2010, Terborgh and Estes 2010).
Thus, the occurrence of a trophic cascade could have important implications not only to the future structure and functioning of northern Yellowstone’s ecosystems but also for other portions of the western United States where wolves have been reintroduced, are expanding their range, or remain absent. However, attempting to identify the occurrence of a trophic cascade in systems with large mammalian predators, as well as the relative importance of density and behavioral mediation, represents a continuing scientific challenge. In Yellowstone today, there is an ongoing effort by various researchers to evaluate ecosystem processes in the park’s two northern ungulate winter ranges: (1) the ‘‘Northern Range’’ along the northern edge of the park (NRC 2002, Barmore 2003) and (2) the ‘‘Upper Gallatin Winter Range’’ along the northwestern corner of the park (Ripple and Beschta 2004b). Previous studies in northern Yellowstone have generally found that elk, in the absence of wolves, caused a decrease in aspen (Populus tremuloides) recruitment (i.e., the growth of seedlings or root sprouts above the browse level of elk). Within this context, Kauffman et al. (2010) initiated a study to provide additional understanding of factors such as elk density, elk behavior, and climate upon historical and contemporary patterns of aspen recruitment in the park’s Northern Range. Like previous studies, Kauffman et al. (2010) concluded that, irrespective of historical climatic conditions, elk have had a major impact on long-term aspen communities after the extirpation of wolves. But, unlike other studies that have seen improvement in the growth or recruitment of young aspen and other browse species in recent years, Kauffman et al. (2010) concluded in their Abstract: ‘‘. . . 
our estimates of relative survivorship of young browsable aspen indicate that aspen are not currently recovering in Yellowstone, even in the presence of a large wolf population.’’ In the interest of clarifying the potential role of wolves on woody plant community dynamics in Yellowstone’s northern winter ranges, we offer several counterpoints to the conclusions of Kauffman et al. (2010). We do so by readdressing several tasks identified in their Introduction (p. 2744): (1) the history of aspen recruitment failure, (2) contemporary aspen recruitment, and (3) aspen recruitment and predation risk. Task 1 covers the period when wolves were absent from Yellowstone and tasks 2 and 3 focus on the period when wolves were again present. We also include some closing comments regarding trophic cascades and ecosystem recovery. 1. History of aspen recruitment failure.—Although records of wolf and elk populations in northern Yellowstone are fragmentary for the early 1900s, the Northern Range elk population averaged ;10 900 animals (7.3 elk/km; Fig. 1A) as the last wolves were being removed in the mid 1920s. Soon thereafter increased browsing by elk of aspen and other woody species was noted in northern Yellowstone’s winter ranges (e.g., Rush 1932, Lovaas 1970). In an attempt to reduce the effects this large herbivore was having on vegetation, soils, and wildlife habitat in the Northern Manuscript received 13 January 2011; revised 10 June 2011; accepted 20 June 2011. Corresponding Editor: C. C. Wilmers. 1 Department of Forest Ecosystems and Society, Oregon State University, Corvallis, Oregon 97331 USA. 
2 E-mail: Robert.Beschta@oregonstate.edu", "title": "" }, { "docid": "2d822e022363b371f62a803d79029f09", "text": "AIM\nTo explore the relationship between sources of stress and psychological burn-out and to consider the moderating and mediating role played by sources of stress and different coping resources on burn-out.\n\n\nBACKGROUND\nMost research exploring sources of stress and coping in nursing students construes stress as psychological distress. Little research has considered those sources of stress likely to enhance well-being and, by implication, learning.\n\n\nMETHOD\nA questionnaire was administered to 171 final year nursing students. Questions were asked which measured sources of stress when rated as likely to contribute to distress (a hassle) and rated as likely to help one achieve (an uplift). Support, control, self-efficacy and coping style were also measured, along with their potential moderating and mediating effect on burn-out.\n\n\nFINDINGS\nThe sources of stress likely to lead to distress were more often predictors of well-being than sources of stress likely to lead to positive, eustress states. However, placement experience was an important source of stress likely to lead to eustress. Self-efficacy, dispositional control and support were other important predictors. Avoidance coping was the strongest predictor of burn-out and, even if used only occasionally, it can have an adverse effect on burn-out. Initiatives to promote support and self-efficacy are likely to have the more immediate benefits in enhancing student well-being.\n\n\nCONCLUSION\nNurse educators need to consider how course experiences contribute not just to potential distress but to eustress. How educators interact with their students and how they give feedback offers important opportunities to promote self-efficacy and provide valuable support.
Peer support is a critical coping resource and can be bolstered through induction and through learning and teaching initiatives.", "title": "" }, { "docid": "14b7c4f8a3fa7089247f1d4a26186c5d", "text": "System Dynamics is often used for dealing with dynamically complex issues that are also uncertain. This paper reviews how uncertainty is dealt with in System Dynamics modeling, where uncertainties are located in models, which types of uncertainties are dealt with, and which levels of uncertainty could be handled. Shortcomings of System Dynamics and its practice in dealing with uncertainty are distilled from this review and reframed as opportunities. Potential opportunities for dealing with uncertainty in System Dynamics that are discussed here include (i) dealing explicitly with difficult sorts of uncertainties, (ii) using multi-model approaches for dealing with alternative assumptions and multiple perspectives, (iii) clearly distinguishing sensitivity analysis from uncertainty analysis and using them for different purposes, (iv) moving beyond invariant model boundaries, (v) using multi-method approaches, advanced techniques and new tools, and (vi) further developing and using System Dynamics strands for dealing with deep uncertainty.", "title": "" }, { "docid": "8582c4a040e4dec8fd141b00eaa45898", "text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. 
Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.", "title": "" }, { "docid": "dc2c952b5864a167c19b34be6db52389", "text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.", "title": "" }, { "docid": "aaf1aac789547c1bf2f918368b43c955", "text": "Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g. strings, vocal, or drums. 
Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture. Similar sections of music can be detected by clustering segments with similar average textures. The repetition of a sequence of music often marks a logical segment. Repeated phrases and hierarchical structures can be discovered by finding similar sequences of feature vectors within a piece of music. Structure analysis can be used to construct music summaries and to assist music browsing. Introduction Probably everyone would agree that music has structure, but most of the interesting musical information that we perceive lies hidden below the complex surface of the audio signal. From this signal, human listeners perceive vocal and instrumental lines, orchestration, rhythm, harmony, bass lines, and other features. Unfortunately, music audio signals have resisted our attempts to extract this kind of information. Researchers are making progress, but so far, computers have not come near to human levels of performance in detecting notes, processing rhythms, or identifying instruments in a typical (polyphonic) music audio texture. On a longer time scale, listeners can hear structure including the chorus and verse in songs, sections in other types of music, repetition, and other patterns. One might think that without the reliable detection and identification of short-term features such as notes and their sources, that it would be impossible to deduce any information whatsoever about even higher levels of abstraction. Surprisingly, it is possible to automatically detect a great deal of information concerning music structure. For example, it is possible to label the structure of a song as AABA, meaning that opening material (the “A” part) is repeated once, then contrasting material (the “B” part) is played, and then the opening material is played again at the end. 
This structural description may be deduced from low-level audio signals. Consequently, a computer might locate the “chorus” of a song without having any representation of the melody or rhythm that characterizes the chorus. Underlying almost all work in this area is the concept that structure is induced by the repetition of similar material. This is in contrast to, say, speech recognition, where there is a common understanding of words, their structure, and their meaning. A string of unique words can be understood using prior knowledge of the language. Music, however, has no language or dictionary (although there are certainly known forms and conventions). In general, structure can only arise in music through repetition or systematic transformations of some kind. Repetition implies there is some notion of similarity. Similarity can exist between two points in time (or at least two very short time intervals), similarity can exist between two sequences over longer time intervals, and similarity can exist between the longer-term statistical behaviors of acoustical features. Different approaches to similarity will be described. Similarity can be used to segment music: contiguous regions of similar music can be grouped together into segments. Segments can then be grouped into clusters. The segmentation of a musical work and the grouping of these segments into clusters is a form of analysis or “explanation” of the music. R. Dannenberg and M. Goto Music Structure 16 April 2005 2 Features and Similarity Measures A variety of approaches are used to measure similarity, but it should be clear that a direct comparison of the waveform data or individual samples will not be useful. Large differences in waveforms can be imperceptible, so we need to derive features of waveform data that are more perceptually meaningful and compare these features with an appropriate measure of similarity. 
Feature Vectors for Spectrum, Texture, and Pitch Different features emphasize different aspects of the music. For example, mel-frequency cepstral coefficients (MFCCs) seem to work well when the general shape of the spectrum but not necessarily pitch information is important. MFCCs generally capture overall “texture” or timbral information (what instruments are playing in what general pitch range), but some pitch information is captured, and results depend upon the number of coefficients used as well as the underlying musical signal. When pitch is important, e.g. when searching for similar harmonic sequences, the chromagram is effective. The chromagram is based on the idea that tones separated by octaves have the same perceived value of chroma (Shepard 1964). Just as we can describe the chroma aspect of pitch, the short term frequency spectrum can be restructured into the chroma spectrum by combining energy at different octaves into just one octave. The chroma vector is a discretized version of the chroma spectrum where energy is summed into 12 log-spaced divisions of the octave corresponding to pitch classes (C, C#, D, ... B). By analogy to the spectrogram, the discrete chromagram is a sequence of chroma vectors. It should be noted that there are several variations of the chromagram. The computation typically begins with a short-term Fourier transform (STFT) which is used to compute the magnitude spectrum. There are different ways to “project” this onto the 12-element chroma vector. Each STFT bin can be mapped directly to the most appropriate chroma vector element (Bartsch and Wakefield 2001), or the STFT bin data can be interpolated or windowed to divide the bin value among two neighboring vector elements (Goto 2003a). Log magnitude values can be used to emphasize the presence of low-energy harmonics. Values can also be averaged, summed, or the vector can be computed to conserve the total energy. The chromagram can also be computed by using the Wavelet transform. 
Regardless of the exact details, the primary attraction of the chroma vector is that, by ignoring octaves, the vector is relatively insensitive to overall spectral energy distribution and thus to timbral variations. However, since fundamental frequencies and lower harmonics of tones feature prominently in the calculation of the chroma vector, it is quite sensitive to pitch class content, making it ideal for the detection of similar harmonic sequences in music. While MFCCs and chroma vectors can be calculated from a single short term Fourier transform, features can also be obtained from longer sequences of spectral frames. Tzanetakis and Cook (1999) use means and variances of a variety of features in a one second window. The features include the spectral centroid, spectral rolloff, spectral flux, and RMS energy. Peeters, La Burthe, and Rodet (2002) describe “dynamic” features, which model the variation of the short term spectrum over windows of about one second. In this approach, the audio signal is passed through a bank of Mel filters. The time-varying magnitudes of these filter outputs are each analyzed by a short term Fourier transform. The resulting set of features, the Fourier coefficients from each Mel filter output, is large, so a supervised learning scheme is used to find features that maximize the mutual information between feature values and hand-labeled music structures. Measures of Similarity Given a feature vector such as the MFCC or chroma vector, some measure of similarity is needed. One possibility is to compute the (dis)similarity using the Euclidean distance between feature vectors. Euclidean distance will be dependent upon feature magnitude, which is often a measure of the overall music signal energy. To avoid giving more weight to the louder moments of music, feature vectors can be normalized, for example, to a mean of zero and a standard deviation of one or to a maximum element of one.
Alternatively, similarity can be measured using the scalar (dot) product of the feature vectors. This measure will be larger when feature vectors have a similar direction. As with Euclidean distance, the scalar product will also vary as a function of the overall magnitude of the feature vectors. If the dot product is normalized by the feature vector magnitudes, the result is equal to the cosine of the angle between the vectors. If the feature vectors are first normalized to have a mean of zero, the cosine angle is equivalent to the correlation, another measure that has been used with success. Lu, Wang, and Zhang (Lu, Wang, and Zhang 2004) use a constant-Q transform (CQT), and found that CQT outperforms chroma and MFCC features using a cosine distance measure. They also introduce a “structure-based” distance measure that takes into account the harmonic structure of spectra to emphasize pitch similarity over timbral similarity, resulting in additional improvement in a music structure analysis task. Similarity can be calculated between individual feature vectors, as suggested above, but similarity can also be computed over a window of feature vectors. The measure suggested by Foote (1999) is vector correlation:", "title": "" }, { "docid": "fe012505cc7a2ea36de01fc92924a01a", "text": "The wide usage of Machine Learning (ML) has lead to research on the attack vectors and vulnerability of these systems. The defenses in this area are however still an open problem, and often lead to an arms race. We define a naive, secure classifier at test time and show that a Gaussian Process (GP) is an instance of this classifier given two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we are able to show that a classifier is either secure, or generalizes and thus learns. Our analysis also points towards another factor influencing robustness, the curvature of the classifier. 
This connection is not unknown for linear models, but GPs offer an ideal framework to study this relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets, applying test and training time attacks and membership inference. We show that we only change which attacks are needed to succeed, instead of alleviating the threat. Only for membership inference, there is a setting in which attacks are unsuccessful (< 10% increase in accuracy over random guess). Given these results, we define a classification scheme based on voting, ParGP. This allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only in cases when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.", "title": "" }, { "docid": "1fa6ee7cf37d60c182aa7281bd333649", "text": "To cope with the explosion of information in mathematics and physics, we need a unified mathematical language to integrate ideas and results from diverse fields. Clifford Algebra provides the key to a unified Geometric Calculus for expressing, developing, integrating and applying the large body of geometrical ideas running through mathematics and physics.", "title": "" }, { "docid": "1b4019d0f2eb9e392b5dfeea8370b625", "text": "Intellectual capital is becoming the preeminent resource for creating economic wealth. Tangible assets such as property, plant, and equipment continue to be important factors in the production of both goods and services. However, their relative importance has decreased through time as the importance of intangible, knowledge-based assets has increased. This shift in importance has raised a number of accounting questions critical for managing assets such as brand names, trade secrets, production processes, distribution channels, and work-related competencies. 
This paper develops a working definition of intellectual capital and a framework for identifying and classifying the various components of intellectual capital. In addition, methods of measuring intellectual capital at both the individual-component and organization levels are presented. This provides an exploratory foundation for accounting systems and processes useful for meaningful management of intellectual assets. INTELLECTUAL CAPITAL AND ITS MEASUREMENT", "title": "" }, { "docid": "fcd30a667cb2f4e89d9174cc37ac698c", "text": "v TABLE OF CONTENTS vii", "title": "" }, { "docid": "4d91ac570bec700f78521754c7e5d0ce", "text": "Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. The basic concept of CAD is to provide a computer output as a second opinion to assist radiologists' image interpretation by improving the accuracy and consistency of radiological diagnosis and also by reducing the image reading time. In this article, a number of CAD schemes are presented, with emphasis on potential clinical applications. These schemes include: (1) detection and classification of lung nodules on digital chest radiographs; (2) detection of nodules in low dose CT; (3) distinction between benign and malignant nodules on high resolution CT; (4) usefulness of similar images for distinction between benign and malignant lesions; (5) quantitative analysis of diffuse lung diseases on high resolution CT; and (6) detection of intracranial aneurysms in magnetic resonance angiography. Because CAD can be applied to all imaging modalities, all body parts and all kinds of examinations, it is likely that CAD will have a major impact on medical imaging and diagnostic radiology in the 21st century.", "title": "" }, { "docid": "6d882c210047b3851cb0514083cf448e", "text": "Child sexual abuse is a serious global problem and has gained public attention in recent years. 
Due to the popularity of digital cameras, many perpetrators take images of their sexual activities with child victims. Traditionally, it was difficult to use cutaneous vascular patterns for forensic identification, because they were nearly invisible in color images. Recently, this limitation was overcome using a computational method based on an optical model to uncover vein patterns from color images for forensic verification. This optical-based vein uncovering (OBVU) method is sensitive to the power of the illuminant and does not utilize skin color in images to obtain training parameters to optimize the vein uncovering performance. Prior publications have not included an automatic vein matching algorithm for forensic identification. As a result, the OBVU method only supported manual verification. In this paper, we propose two new schemes to overcome limitations in the OBVU method. Specifically, a color optimization scheme is used to derive the range of biophysical parameters to obtain training parameters and an automatic intensity adjustment scheme is used to enhance the robustness of the vein uncovering algorithm. We also developed an automatic matching algorithm for vein identification. This algorithm can handle rigid and non-rigid deformations and has an explicit pruning function to remove outliers in vein patterns. The proposed algorithms were examined on a database with 300 pairs of color and near infrared (NIR) images collected from the forearms of 150 subjects. The experimental results are encouraging and indicate that the proposed vein uncovering algorithm performs better than the OBVU method and that the uncovered patterns can potentially be used for automatic criminal and victim identification.", "title": "" }, { "docid": "8f7d2c365f6272a7e681a48b500299c7", "text": "In today's world, opinions and reviews accessible to us are one of the most critical factors in formulating our views and influencing the success of a brand, product or service. 
With the advent and growth of social media in the world, stakeholders often take to expressing their opinions on popular social media, namely Twitter. While Twitter data is extremely informative, it presents a challenge for analysis because of its humongous and disorganized nature. This paper is a thorough effort to dive into the novel domain of performing sentiment analysis of people's opinions regarding top colleges in India. Besides taking additional preprocessing measures like the expansion of net lingo and removal of duplicate tweets, a probabilistic model based on Bayes' theorem was used for spelling correction, a step that is overlooked in other research studies. This paper also highlights a comparison between the results obtained by exploiting the following machine learning algorithms: Naïve Bayes and Support Vector Machine and an Artificial Neural Network model: Multilayer Perceptron. Furthermore, a contrast has been presented between four different kernels of SVM: RBF, linear, polynomial and sigmoid.", "title": "" }, { "docid": "98ca1c0100115646bb14a00f19c611a5", "text": "The interconnected nature of graphs often results in difficult-to-interpret clutter. Typically, techniques focus on either decluttering by clustering nodes with similar properties or grouping edges with similar relationships. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on given data by utilizing a scalar function defined on every point in the data and a cover of the scalar function's codomain. The output of mapper is a graph that summarizes the shape of the space. In this paper, we outline how to use this mapper construction on input graphs, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. 
To validate our approach, we conduct several case studies on synthetic and real world data sets and demonstrate how our method can give meaningful summaries for graphs with various", "title": "" }, { "docid": "8410b8b76ab690ed4389efae15608d13", "text": "The most natural way to speed up the training of large networks is to use data-parallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one needs to increase the batch size to make full use of the computational power of each GPU. However, maintaining network accuracy as the batch size increases is not trivial. Currently, the state-of-the-art method is to increase the Learning Rate (LR) in proportion to the batch size, and use a special learning rate with a \"warm-up\" policy to overcome the initial optimization difficulty. By controlling the LR during the training process, one can efficiently use large batches in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we cannot scale the learning rate to a large value. To enable large-batch training for general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). The LARS LR uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using the LARS algorithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batches can make full use of the system’s computational power. 
For example, batch-4096 can achieve 3× speedup over batch-512 for ImageNet training with the AlexNet model on a DGX-1 station (8 P100 GPUs).", "title": "" }, { "docid": "fc12ac921348a77714bff6ec39b0e052", "text": "For decades, nurses (RNs) have identified barriers to providing the optimal pain management that children deserve; yet no studies were found in the literature that assessed these barriers over time or across multiple pediatric hospitals. The purpose of this study was to reassess barriers that pediatric RNs perceive, and how they describe optimal pain management, 3 years after our initial assessment, collect quantitative data regarding barriers identified through comments during our initial assessment, and describe any changes over time. The Modified Barriers to Optimal Pain Management survey was used to measure barriers in both studies. RNs were invited via e-mail to complete an electronic survey. Descriptive and inferential statistics were used to compare results over time. Four hundred forty-two RNs responded, representing a 38% response rate. RNs continue to describe optimal pain management most often in terms of patient comfort and level of functioning. While small changes were seen for several of the barriers, the most significant barriers continued to involve delays in the availability of medications, insufficient physician medication orders, and insufficient orders and time allowed to pre-medicate patients before procedures. To our knowledge, this is the first study to reassess RNs' perceptions of barriers to pediatric pain management over time. While little change was seen in RNs' descriptions of optimal pain management or in RNs' perceptions of barriers, no single item was rated as more than a moderate barrier to pain management. The implications of these findings are discussed in the context of improvement strategies.", "title": "" } ]
scidocsrr
c60fb0a942c51ee8af163e87d5cd7965
"Breaking" Disasters: Predicting and Characterizing the Global News Value of Natural and Man-made Disasters
[ { "docid": "2116414a3e7996d4701b9003a6ccfd15", "text": "Informal genres such as tweets provide large quantities of data in real time, which can be exploited to obtain, through ranking and classification, a succinct summary of the events that occurred. Previous work on tweet ranking and classification mainly focused on salience and social network features or relied on web documents such as online news articles. In this paper, we exploit language independent journalism and content based features to identify news from tweets. We propose a novel newsworthiness classifier trained through active learning and investigate human assessment and automatic methods to encode it on both the tweet and trending topic levels. Our findings show that content and journalism based features proved to be effective for ranking and classifying content on Twitter.", "title": "" }, { "docid": "1274ab286b1e3c5701ebb73adc77109f", "text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.", "title": "" } ]
[ { "docid": "e9a66ce7077baf347d325bca7b008d6b", "text": "Recent research has shown that the Wavelet Transform (WT) can potentially be used to extract Partial Discharge (PD) signals from severe noise like white noise, random noise and Discrete Spectral Interferences (DSI). It is important to note that noise is a significant problem in PD detection. Accordingly, the paper mainly deals with denoising of PD signals, based on an improved WT technique, namely the Translation Invariant Wavelet Transform (TIWT). The improved WT method is distinct from the traditional method known as the Fast Fourier Transform (FFT). The TIWT not only retains the edges of the original signal efficiently but also reduces impulsive noise to some extent. Additionally, Translation Invariant (TI) Wavelet Transform denoising is used to suppress the pseudo-Gibbs phenomenon. In this paper an attempt has been made to review the methodology of denoising partial discharge signals, showing that the proposed denoising method's results are better when compared to other wavelet-based approaches like FFT, wavelet hard thresholding, and wavelet soft thresholding, by evaluating five different parameters: signal-to-noise ratio, cross-correlation coefficient, pulse amplitude distortion, mean square error, and reduction in noise level.", "title": "" }, { "docid": "bacb761bc173a07bf13558e2e5419c2b", "text": "Rejection sensitivity is the disposition to anxiously expect, readily perceive, and intensely react to rejection. In response to perceived social exclusion, highly rejection sensitive people react with increased hostile feelings toward others and are more likely to show reactive aggression than less rejection sensitive people in the same situation. This paper summarizes work on rejection sensitivity that has provided evidence for the link between anxious expectations of rejection and hostility after rejection. We review evidence that rejection sensitivity functions as a defensive motivational system. 
Thus, we link rejection sensitivity to attentional and perceptual processes that underlie the processing of social information. A range of experimental and diary studies shows that perceiving rejection triggers hostility and aggressive behavior in rejection sensitive people. We review studies that show that this hostility and reactive aggression can perpetuate a vicious cycle by eliciting rejection from those whom rejection sensitive people value most. Finally, we summarize recent work suggesting that this cycle can be interrupted with generalized self-regulatory skills and the experience of positive, supportive relationships.", "title": "" }, { "docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd", "text": "This article presents the upper-torso design issue of Affetto, which can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and is applied to a phenomenon noted by Masahiro Mori, who observed that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows: 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. 
For example, the facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to its baby-like face skin mask of urethane elastomer gel (see Figure 1). The generated facial expressions have almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity, as motor experience biases the perception of movements [3]. To verify this hypothesis, Affetto needs a body that realizes physical interactions naturally. The rest of this article is organized as follows. The next section discusses the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.", "title": "" }, { "docid": "5527521d567290192ea26faeb6e7908c", "text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. 
The goal of this paper is to provide a systematic review of the MKL methods that have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different HSI classification cases. Finally, we discuss the future direction and trends of research in this area.", "title": "" }, { "docid": "8c34f43e7d3f760173257fbbc58c22ca", "text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). 
We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. Our results indicate that the multi-objective approach identifies the exact target solution more often than the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.", "title": "" }, { "docid": "a57b2e8b24cced6f8bfad942dd530499", "text": "With the tremendous growth of network-based services and sensitive information on networks, network security is gaining more importance than ever. Intrusion poses a serious security risk in a network environment. The ever-growing variety of new intrusion types poses a serious problem for their detection. The human labelling of the available network audit data instances is usually tedious, time-consuming and expensive. In this paper, we apply one of the efficient data mining algorithms, naïve Bayes, for anomaly based network intrusion detection. Experimental results on the KDD cup’99 data set show the novelty of our approach in detecting network intrusion. It is observed that the proposed technique performs better in terms of false positive rate, cost, and computational time when applied to KDD’99 data sets compared to a back propagation neural network based approach.", "title": "" }, { "docid": "72c0cef98023dd5b6c78e9c347798545", "text": "Several works have shown that Convolutional Neural Networks (CNNs) can be easily adapted to different datasets and tasks. However, for extracting the deep features from these pre-trained deep CNNs a fixed-size (e.g., 227×227) input image is mandatory. Now the state-of-the-art datasets like MIT-67 and SUN-397 come with images of different sizes. 
Usage of CNNs for these datasets forces the user to bring different sized images to a fixed size either by reducing or enlarging the images. The obvious question is: “Isn’t the conversion to a fixed-size image lossy?”. In this work, we provide a mechanism to keep these lossy fixed-size images aside and process the images in their original form to get a set of varying-size deep feature maps, hence being lossless. We also propose a deep spatial pyramid match kernel (DSPMK), which amalgamates a set of varying-size deep feature maps and computes a matching score between the samples. The proposed DSPMK acts as a dynamic kernel in the classification framework for scene datasets using a support vector machine. We demonstrate the effectiveness of combining the power of varying-size CNN-based deep feature maps with the dynamic kernel by achieving state-of-the-art results for high-level visual recognition tasks such as scene classification on standard datasets like MIT67 and SUN397.", "title": "" }, { "docid": "5da804fa4c1474e27a1c91fcf5682e20", "text": "We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. Introduction Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to English text. Our goal is to perform fully-automatic, high-quality text-to-text translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator's-assistant modes. 
Our approach is founded upon the statistical analysis of language. Our chief tools are the source-channel model of communication, parametric probability models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Candide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probability theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the ARPA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2. Statistical Translation Consider the problem of translating French text to English text. Given a French sentence f, we imagine that it was originally rendered as an equivalent English sentence e. To obtain the French, the English was transmitted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY. Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putative original English rendering, and ê is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write Pr(e | f) for the probability that e was the original English rendering of the French f.
Given a French sentence f, the problem of automatic translation reduces to finding the English sentence that maximizes Pr(e | f). That is, we seek ê = argmax_e Pr(e | f). By virtue of Bayes' Theorem, we have ê = argmax_e Pr(e | f) = argmax_e Pr(f | e)Pr(e) (1). The term Pr(f | e) models the probability that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr(e) models the a priori probability that e was supplied as the channel input. We call this function the language model. Each of these factors, the translation model and the language model, independently produces a score for a candidate English translation e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide selects as its translation the e that maximizes their product. This discussion begs two important questions. First, where do the models Pr(f | e) and Pr(e) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find ê? These questions are addressed in the next two sections. 2.1. Probability Models We begin with a brief detour into probability theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model better match some body of data. Let us write c for a body of data to be modeled, and θ for a vector of parameters.
The quantity Pr_θ(c), computed according to some formula involving c and θ, is called the likelihood. [Human Language Technology, Plainsboro, 1994]", "title": "" }, { "docid": "5c0f2bcde310b7b76ed2ca282fde9276", "text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. 
These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.", "title": "" }, { "docid": "c8305675ba4bb16f26abf820db4b8a38", "text": "Microbes are dominant drivers of biogeochemical processes, yet drawing a global picture of functional diversity, microbial community structure, and their ecological determinants remains a grand challenge. We analyzed 7.2 terabases of metagenomic data from 243 Tara Oceans samples from 68 locations in epipelagic and mesopelagic waters across the globe to generate an ocean microbial reference gene catalog with >40 million nonredundant, mostly novel sequences from viruses, prokaryotes, and picoeukaryotes. Using 139 prokaryote-enriched samples, containing >35,000 species, we show vertical stratification with epipelagic community composition mostly driven by temperature rather than other environmental factors or geography. We identify ocean microbial core functionality and reveal that >73% of its abundance is shared with the human gut microbiome despite the physicochemical differences between these two ecosystems.", "title": "" }, { "docid": "29236d00bde843ff06e0f1a3e0ab88e4", "text": "The advent of the modern cruise missile, with reduced radar observables and the capability to fly at low altitudes with accurate navigation, placed an enormous burden on all defense weapon systems. Every element of the engagement process, referred to as the kill chain, from detection to target kill assessment, was affected. While the United States held the low-observable technology advantage in the late 1970s, that early lead was quickly challenged by advancements in foreign technology and proliferation of cruise missiles to unfriendly nations. 
Lincoln Laboratory’s response to the various offense/defense trade-offs has taken the form of two programs, the Air Vehicle Survivability Evaluation program and the Radar Surveillance Technology program. The radar developments produced by these two programs, which became national assets with many notable firsts, is the subject of this article.", "title": "" }, { "docid": "5cdb981566dfd741c9211902c0c59d50", "text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.", "title": "" }, { "docid": "ac1d1bf198a178cb5655768392c3d224", "text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. 
We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.", "title": "" }, { "docid": "7167964274b05da06beddb1aef119b2c", "text": "A great variety of systems in nature, society and technology—from the web of sexual contacts to the Internet, from the nervous system to power grids—can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. 
The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names—temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology—rather, we want to make papers readable across disciplines.", "title": "" }, { "docid": "71576ab1edd5eadbda1f34baba91b687", "text": "Visualization can make a wide range of mobile applications more intuitive and productive. The mobility context and technical limitations such as small screen size make it impossible to simply port visualization applications from desktop computers to mobile devices, but researchers are starting to address these challenges. From a purely technical point of view, building more sophisticated mobile visualizations become easier due to new, possibly standard, software APIs such as OpenGLES and increasingly powerful devices. Although ongoing improvements would not eliminate most device limitations or alter the mobility context, they make it easier to create and experiment with alternative approaches.", "title": "" }, { "docid": "1e8f25674dc66a298c277d80dd031c20", "text": "DeepQ Arrhythmia Database, the first generally available large-scale dataset for arrhythmia detector evaluation, contains 897 annotated single-lead ECG recordings from 299 unique patients. DeepQ includes beat-by-beat, rhythm episodes, and heartbeats fiducial points annotations. Each patient was engaged in a sequence of lying down, sitting, and walking activities during the ECG measurement and contributed three five-minute records to the database. Annotations were manually labeled by a group of certified cardiographic technicians and audited by a cardiologist at Taipei Veteran General Hospital, Taiwan. The aim of this database is in three folds. 
First, from the scale perspective, we build this database to be the largest representative reference set with greater number of unique patients and more variety of arrhythmic heartbeats. Second, from the diversity perspective, our database contains fully annotated ECG measures from three different activity modes and facilitates the arrhythmia classifier training for wearable ECG patches and AAMI assessment. Thirdly, from the quality point of view, it serves as a complement to the MIT-BIH Arrhythmia Database in the development and evaluation of the arrhythmia detector. The addition of this dataset can help facilitate the exhaustive studies using machine learning models and deep neural networks, and address the inter-patient variability. Further, we describe the development and annotation procedure of this database, as well as our on-going enhancement. We plan to make DeepQ database publicly available to advance medical research in developing outpatient, mobile arrhythmia detectors.", "title": "" }, { "docid": "844116dc8302aac5076c95ac2218b5bd", "text": "Virtual reality and augmented reality technology has existed in various forms for over two decades. However, high cost proved to be one of the main barriers to its adoption in education, outside of experimental studies. The creation and widespread sale of low-cost virtual reality devices using smart phones has made virtual reality technology available to the common person. This paper reviews how virtual reality and augmented reality has been used in education, discusses the advantages and disadvantages of using these technologies in the classroom, and describes how virtual reality and augmented reality technologies can be used to enhance teaching at the United States Military Academy.", "title": "" }, { "docid": "243391e804c06f8a53af906b31d4b99a", "text": "As key decisions are often made based on information contained in a database, it is important for the database to be as complete and correct as possible. 
For this reason, many data cleaning tools have been developed to automatically resolve inconsistencies in databases. However, data cleaning tools provide only best-effort results and usually cannot eradicate all errors that may exist in a database. Even more importantly, existing data cleaning tools do not typically address the problem of determining what information is missing from a database.\n To overcome the limitations of existing data cleaning techniques, we present QOCO, a novel query-oriented system for cleaning data with oracles. Under this framework, incorrect (resp. missing) tuples are removed from (added to) the result of a query through edits that are applied to the underlying database, where the edits are derived by interacting with domain experts which we model as oracle crowds. We show that the problem of determining minimal interactions with oracle crowds to derive database edits for removing (adding) incorrect (missing) tuples to the result of a query is NP-hard in general and present heuristic algorithms that interact with oracle crowds. Finally, we implement our algorithms in our prototype system QOCO and show that it is effective and efficient through a comprehensive suite of experiments.", "title": "" }, { "docid": "9c8648843bfc33f6c66845cd63df94d0", "text": "BACKGROUND\nThe safety and short-term benefits of laparoscopic colectomy for cancer remain debatable. The multicentre COLOR (COlon cancer Laparoscopic or Open Resection) trial was done to assess the safety and benefit of laparoscopic resection compared with open resection for curative treatment of patients with cancer of the right or left colon.\n\n\nMETHODS\n627 patients were randomly assigned to laparoscopic surgery and 621 patients to open surgery. The primary endpoint was cancer-free survival 3 years after surgery. 
Secondary outcomes were short-term morbidity and mortality, number of positive resection margins, local recurrence, port-site or wound-site recurrence, metastasis, overall survival, and blood loss during surgery. Analysis was by intention to treat. Here, clinical characteristics, operative findings, and postoperative outcome are reported.\n\n\nFINDINGS\nPatients assigned laparoscopic resection had less blood loss compared with those assigned open resection (median 100 mL [range 0-2700] vs 175 mL [0-2000], p<0.0001), although laparoscopic surgery lasted 30 min longer than did open surgery (p<0.0001). Conversion to open surgery was needed for 91 (17%) patients undergoing the laparoscopic procedure. Radicality of resection as assessed by number of removed lymph nodes and length of resected oral and aboral bowel did not differ between groups. Laparoscopic colectomy was associated with earlier recovery of bowel function (p<0.0001), need for fewer analgesics, and with a shorter hospital stay (p<0.0001) compared with open colectomy. Morbidity and mortality 28 days after colectomy did not differ between groups.\n\n\nINTERPRETATION\nLaparoscopic surgery can be used for safe and radical resection of cancer in the right, left, and sigmoid colon.", "title": "" } ]
scidocsrr
e8f7ea82049f1d52c4b99239d3a193f0
Geometric modeling using octree encoding
[ { "docid": "d004f3eb6dad2276a8754612ef977ccc", "text": "Most results in the field of algorithm design are single algorithms that solve single problems. In this paper we discuss multidimensional divide-and-conquer, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems. We use this paradigm to give best-known solutions to such problems as the ECDF, maxima, range searching, closest pair, and all nearest neighbor problems. The contributions of the paper are on two levels. On the first level are the particular algorithms and data structures given by applying the paradigm. On the second level is the more novel contribution of this paper: a detailed study of an algorithmic paradigm that is specific enough to be described precisely yet general enough to solve a wide variety of problems.", "title": "" } ]
[ { "docid": "7c9cd59a4bb14f678c57ad438f1add12", "text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.", "title": "" }, { "docid": "a162d5e622bb7fa8f281e7c9b5943346", "text": "The Legionellae are Gram-negative bacteria able to survive and replicate in a wide range of protozoan hosts in natural environments, but they also occur in man-made aquatic systems, which are the major source of infection. After transmission to humans via aerosols, Legionella spp. can cause pneumonia (Legionnaires’ disease) or influenza-like respiratory infections (Pontiac fever). In children, Legionnaires’ disease is uncommon and is mainly diagnosed in children with immunosuppression. The clinical picture of Legionella pneumonia does not allow differentiation from pneumonia caused by others pathogens. The key to diagnosis is performing appropriate microbiological testing. The clinical presentation and the natural course of Legionnaires’ disease in children are not clear due to an insufficient number of samples, but morbidity and mortality caused by this infection are extremely high. The mortality rate for legionellosis depends on the promptness of an appropriate antibiotic therapy. Fluoroquinolones are the most efficacious drugs against Legionella. 
A combination of these drugs with macrolides seems to be promising in the treatment of immunosuppressed patients and individuals with severe legionellosis. Although all Legionella species are considered potentially pathogenic for humans, Legionella pneumophila is the etiological agent responsible for most reported cases of community-acquired and nosocomial legionellosis.", "title": "" }, { "docid": "ba58ba95879516c00d91cf75754eb131", "text": "In order to assess the current knowledge on the therapeutic potential of cannabinoids, a meta-analysis was performed through Medline and PubMed up to July 1, 2005. The key words used were cannabis, marijuana, marihuana, hashish, hashich, haschich, cannabinoids, tetrahydrocannabinol, THC, dronabinol, nabilone, levonantradol, randomised, randomized, double-blind, simple blind, placebo-controlled, and human. The research also included the reports and reviews published in English, French and Spanish. For the final selection, only properly controlled clinical trials were retained, thus open-label studies were excluded. Seventy-two controlled studies evaluating the therapeutic effects of cannabinoids were identified. For each clinical trial, the country where the project was held, the number of patients assessed, the type of study and comparisons done, the products and the dosages used, their efficacy and their adverse effects are described. Cannabinoids present an interesting therapeutic potential as antiemetics, appetite stimulants in debilitating diseases (cancer and AIDS), analgesics, and in the treatment of multiple sclerosis, spinal cord injuries, Tourette's syndrome, epilepsy and glaucoma.", "title": "" }, { "docid": "cd549297cb4644aaf24c28b5bbdadb24", "text": "This study identifies the difference in the perceptions of academic stress and reaction to stressors based on gender among first year university students in Nigeria. 
Student Academic Stress Scale (SASS) was the instrument used to collect data from 2,520 first year university students chosen through systematic random sampling from Universities in the six geo-political zones of Nigeria. To determine gender differences among the respondents, independent samples t-test was used via SPSS version 15.0. The results of research showed that male and female respondents differed significantly in their perceptions of frustrations, financials, conflicts and selfexpectations stressors but did not significantly differ in their perceptions of pressures and changesrelated stressors. Generally, no significant difference was found between male and female respondents in their perceptions of academic stressors, however using the mean scores as basis, female respondents scored higher compared to male respondents. Regarding reaction to stressors, male and female respondents differ significantly in their perceptions of emotional and cognitive reactions but did not differ significantly in their perceptions of physiological and behavioural reaction to stressors.", "title": "" }, { "docid": "8dadd14f0de2a17ca5066703a19f1aff", "text": "Human gait provides a way of locomotion by combined efforts of the brain, nerves, and muscles. Conventionally, the human gait has been considered subjectively through visual observations but now with advanced technology, human gait analysis can be done objectively and empirically for the better quality of life. In this paper, the literature of the past survey on gait analysis has been discussed. This is followed by discussion on gait analysis methods. Vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. Data parameters for gait analysis have been discussed followed by preprocessing steps. Then the implemented machine learning techniques have been discussed in detail. 
The objective of this survey paper is to present a comprehensive analysis of contemporary gait analysis. This paper presents a framework (parameters, techniques, available database, machine learning techniques, etc.) for researchers in identifying the infertile areas of gait analysis. The authors expect that the overview presented in this paper will help advance the research in the field of gait analysis. Introduction to basic taxonomies of human gait is presented. Applications in clinical diagnosis, geriatric care, sports, biometrics, rehabilitation, and industrial area are summarized separately. Available machine learning techniques are also presented with available datasets for gait analysis. Future prospective in gait analysis are discussed in the end.", "title": "" }, { "docid": "e473e6b4c5d825582f3a5afe00a005de", "text": "This paper explores and quantifies garbage collection behavior for three whole heap collectors and generational counterparts: copying semi-space, mark-sweep, and reference counting, the canonical algorithms from which essentially all other collection algorithms are derived. Efficient implementations in MMTk, a Java memory management toolkit, in IBM's Jikes RVM share all common mechanisms to provide a clean experimental platform. Instrumentation separates collector and program behavior, and performance counters measure timing and memory behavior on three architectures.Our experimental design reveals key algorithmic features and how they match program characteristics to explain the direct and indirect costs of garbage collection as a function of heap size on the SPEC JVM benchmarks. For example, we find that the contiguous allocation of copying collectors attains significant locality benefits over free-list allocators. The reduced collection costs of the generational algorithms together with the locality benefit of contiguous allocation motivates a copying nursery for newly allocated objects. 
These benefits dominate the overheads of generational collectors compared with non-generational and no collection, disputing the myth that \"no garbage collection is good garbage collection.\" Performance is less sensitive to the mature space collection algorithm in our benchmarks. However the locality and pointer mutation characteristics for a given program occasionally prefer copying or mark-sweep. This study is unique in its breadth of garbage collection algorithms and its depth of analysis.", "title": "" }, { "docid": "ef2996a04c819777cc4b88c47f502c21", "text": "Bioprinting is an emerging technology for constructing and fabricating artificial tissue and organ constructs. This technology surpasses the traditional scaffold fabrication approach in tissue engineering (TE). Currently, there is a plethora of research being done on bioprinting technology and its potential as a future source for implants and full organ transplantation. This review paper overviews the current state of the art in bioprinting technology, describing the broad range of bioprinters and bioink used in preclinical studies. Distinctions between laser-, extrusion-, and inkjet-based bioprinting technologies along with appropriate and recommended bioinks are discussed. In addition, the current state of the art in bioprinter technology is reviewed with a focus on the commercial point of view. Current challenges and limitations are highlighted, and future directions for next-generation bioprinting technology are also presented. [DOI: 10.1115/1.4028512]", "title": "" }, { "docid": "9e7e7a3c4ec5db247cfe3f61b1dbceaa", "text": "Digital information displays are becoming more common in public spaces such as museums, galleries, and libraries. However, the public nature of these locations requires special considerations concerning the design of information visualization in terms of visual representations and interaction techniques. 
We discuss the potential for, and challenges of, information visualization in the museum context based on our practical experience with EMDialog, an interactive information presentation that was part of the Emily Carr exhibition at the Glenbow Museum in Calgary. EMDialog visualizes the diverse and multi-faceted discourse about this Canadian artist with the goal to both inform and provoke discussion. It provides a visual exploration environment that offers interplay between two integrated visualizations, one for information access along temporal, and the other along contextual dimensions. We describe the results of an observational study we conducted at the museum that revealed the different ways visitors approached and interacted with EMDialog, as well as how they perceived this form of information presentation in the museum context. Our results include the need to present information in a manner sufficiently attractive to draw attention and the importance of rewarding passive observation as well as both short- and longer term information exploration.", "title": "" }, { "docid": "6c504c7a69dba18e8cbc6a3678ab4b09", "text": "This letter presents a compact model for flexible analog/RF circuits design with amorphous indium-gallium-zinc oxide thin-film transistors (TFTs). The model is based on the MOSFET LEVEL=3 SPICE model template, where parameters are fitted to measurements for both dc and ac characteristics. The proposed TFT compact model shows good scalability of the drain current for device channel lengths ranging from 50 to 3.6 μm. The compact model is validated by comparing measurements and simulations of various TFT amplifier circuits. 
These include a two-stage cascode amplifier showing 10 dB of voltage gain and 2.9 MHz of bandwidth.", "title": "" }, { "docid": "e519d705cd52b4eb24e4e936b849b3ce", "text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how good these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.", "title": "" }, { "docid": "4f355aa038e56b9449181eb780e05484", "text": "Composite indices or pooled indices are useful tools for the evaluation of disease activity in patients with rheumatoid arthritis (RA). They allow the integration of various aspects of the disease into a single numerical value, and may therefore facilitate consistent patient care and improve patient compliance, which both can lead to improved outcomes. The Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI) are two new tools for the evaluation of disease activity in RA. 
They have been developed to provide physicians and patients with simple and more comprehensible instruments. Moreover, the CDAI is the only composite index that does not incorporate an acute phase response and can therefore be used to conduct a disease activity evaluation essentially anytime and anywhere. These two new tools have not been developed to replace currently available instruments such as the DAS28, but rather to provide options for different environments. The comparative construct, content, and discriminant validity of all three indices--the DAS28, the SDAI, and the CDAI--allow physicians to base their choice of instrument on their infrastructure and their needs, and all of them can also be used in clinical trials.", "title": "" }, { "docid": "70fac5e4b287e8f47a4eec44f5c36373", "text": "In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. 
Further, we demonstrate state-of-the-art 3D shape completion results.", "title": "" }, { "docid": "83d788ffb340b89c482965b96d6803c2", "text": "A dead-time compensation method in voltage-source inverters (VSIs) is proposed. The method is based on a feedforward approach which produces compensating signals obtained from those of the I/sub d/-I/sub q/ current and primary angular frequency references in a rotating reference (d-q) frame. The method features excellent inverter output voltage distortion correction for both fundamental and harmonic components. The correction is not affected by the magnitude of the inverter output voltage or current distortions. Since this dead-time compensation method allows current loop calculations in the d-q frame at a slower sampling rate with a conventional microprocessor than calculations in a stationary reference frame, a fully digital, vector-controlled speed regulator with just a current component loop is realized for PWM (pulsewidth modulation) VSIs. Test results obtained for the compression method are described.<<ETX>>", "title": "" }, { "docid": "aa7fe787492aa8aa3d50f748b2df17cb", "text": "Smart Contracts sind rechtliche Vereinbarungen, die sich IT-Technologien bedienen, um die eigene Durchsetzbarkeit sicherzustellen. Es werden durch Smart Contracts autonom Handlungen initiiert, die zuvor vertraglich vereinbart wurden. Beispielsweise können vereinbarte Zahlungen von Geldbeträgen selbsttätig veranlasst werden. Basieren Smart Contracts auf Blockchains, ergeben sich per se vertrauenswürdige Transaktionen. Eine dritte Instanz zur Sicherstellung einer korrekten Transaktion, beispielsweise eine Bank oder ein virtueller Marktplatz, wird nicht benötigt. Echte Peer-to-Peer-Verträge sind möglich. Ein weiterer Anwendungsfall von Smart Contracts ist denkbar. Smart Contracts könnten statt Vereinbarungen von Vertragsparteien gesetzliche Regelungen ausführen. 
Beispielsweise die Regelungen des Patentgesetzes könnten durch einen Smart Contract implementiert werden. Die Verwaltung von IPRs (Intellectual Property Rights) entsprechend den gesetzlichen Regelungen würde dadurch sichergestellt werden. Bislang werden Spezialisten, beispielsweise Patentanwälte, benötigt, um eine akkurate Administration von Schutzrechten zu gewährleisten. Smart Contracts könnten die Dienstleistungen dieser Spezialisten auf dem Gebiet des geistigen Eigentums obsolet werden lassen.", "title": "" }, { "docid": "b045e59c52ff1d555f79831f96309d5c", "text": "In this paper, we show that for several clustering problems one can extract a small set of points, so that using those core-sets enable us to perform approximate clustering efficiently. The surprising property of those core-sets is that their size is independent of the dimension.Using those, we present a (1+ ε)-approximation algorithms for the k-center clustering and k-median clustering problems in Euclidean space. The running time of the new algorithms has linear or near linear dependency on the number of points and the dimension, and exponential dependency on 1/ε and k. As such, our results are a substantial improvement over what was previously known.We also present some other clustering results including (1+ ε)-approximate 1-cylinder clustering, and k-center clustering with outliers.", "title": "" }, { "docid": "235899b940c658316693d0a481e2d954", "text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. 
The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. 
The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and indicate that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.", "title": "" }, { "docid": "7098df58dc9f86c9b462610f03bd97a6", "text": "The advent of the computer and computer science, and in particular virtual reality, offers new experimental possibilities with numerical simulations and introduces a new type of investigation for the study of complex systems: the in virtuo experiment. This work lies within the framework of multi-agent systems. We propose a generic model for systems biology based on reification of the interactions, on a concept of organization and on a multi-model approach. By ``reification'' we mean that interactions are treated as autonomous agents. The aim has been to combine the systemic paradigm and virtual reality to provide an application able to collect, simulate, experiment with and understand the knowledge held by different biologists working around an interdisciplinary subject. In this case, we have focused on understanding the urticaria disease. The method makes it possible to integrate models of different natures. We have modeled biochemical reactions, molecular diffusion, cell organisations and mechanical interactions. It also makes it possible to embed different expert-system modeling methods, such as fuzzy cognitive maps.", "title": "" }, { "docid": "c91fe61e7ef90867377940644b566d93", "text": "The adoption of Learning Management Systems to create virtual learning communities is an unstructured form of enabling collaboration that is rapidly growing. Compared to other systems that structure interactions, these environments provide data on the interactions performed at a very low level. For assessment purposes, this fact makes it difficult to derive higher-level indicators of collaboration. 
In this paper we propose to shape the analysis problem as a data mining task. We suggest that the typical data mining cycle bears many resemblances to proposed models for collaboration management. We present some preliminary experiments using clustering to discover patterns reflecting user behaviors. Results are very encouraging and suggest several research directions.", "title": "" }, { "docid": "56206ddb152c3a09f3e28a6ffa703cd6", "text": "This chapter introduces the operation and control of a Doubly-fed Induction Generator (DFIG) system. The DFIG is currently the system of choice for multi-MW wind turbines. The aerodynamic system must be capable of operating over a wide wind speed range in order to achieve optimum aerodynamic efficiency by tracking the optimum tip-speed ratio. Therefore, the generator’s rotor must be able to operate at a variable rotational speed. The DFIG system therefore operates in both sub- and super-synchronous modes with a rotor speed range around the synchronous speed. The stator circuit is directly connected to the grid while the rotor winding is connected via slip-rings to a three-phase converter. For variable-speed systems where the speed range requirements are small, for example ±30% of synchronous speed, the DFIG offers adequate performance and is sufficient for the speed range required to exploit typical wind resources. An AC-DC-AC converter is included in the induction generator rotor circuit. The power electronic converters need only be rated to handle a fraction of the total power – the rotor power – typically about 30% of nominal generator power. Therefore, the losses in the power electronic converter can be reduced, compared to a system where the converter has to handle the entire power, and the system cost is lower due to the partially-rated power electronics. This chapter will introduce the basic features and normal operation of DFIG systems for wind power applications, basing the description on the standard induction generator. 
Different aspects that will be described include their variable-speed feature, power converters and their associated control systems, and application issues.", "title": "" }, { "docid": "004743271b82054bae970bd0d17c1bd3", "text": "In 1934, Jordan et al. gave a necessary algebraic condition, the Jordan identity, for a sensible theory of quantum mechanics. All but one of the algebras that satisfy this condition can be described by Hermitian matrices over the complexes or quaternions. The remaining, exceptional Jordan algebra can be described by 3 × 3 Hermitian matrices over the octonions. We first review properties of the octonions and the exceptional Jordan algebra, including our previous work on the octonionic Jordan eigenvalue problem. We then examine a particular real, noncompact form of the Lie group E6, which preserves determinants in the exceptional Jordan algebra. Finally, we describe a possible symmetry-breaking scenario within E6: first choose one of the octonionic directions to be special, then choose one of the 2× 2 submatrices inside the 3× 3 matrices to be special. Making only these two choices, we are able to describe many properties of leptons in a natural way. We further speculate on the ways in which quarks might be similarly encoded.", "title": "" } ]
scidocsrr
e8bbe717500b0fb201be13a68456ecd4
Understanding the Digital Marketing Environment with KPIs and Web Analytics
[ { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.", "title": "" } ]
[ { "docid": "76efa42a492d8eb36b82397e09159c30", "text": "attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup’s final target is a world cup with real robots, RoboCup offers a software platform for research on the software aspects of RoboCup. This article describes technical challenges involved in RoboCup, rules, and the simulation environment.", "title": "" }, { "docid": "1d26fc3a5f07e7ea678753e7171846c4", "text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UK-means to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.", "title": "" }, { "docid": "711daac04e27d0a413c99dd20f6f82e1", "text": "Gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently most systems only classify datasets with a couple of dozen different actions. Moreover, feature extraction from the data is often computationally complex. 
In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs input data. We use deep autoencoder to visualize learnt features. The experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis can. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our knowledge, the state of the art result for such a large dataset.", "title": "" }, { "docid": "b93455e6b023910bf7711d56d16f62a2", "text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. 
By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.", "title": "" }, { "docid": "6a8afd6713425e7dc047da08d7c4c773", "text": "We present the first linear time (1 + /spl epsiv/)-approximation algorithm for the k-means problem for fixed k and /spl epsiv/. Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling.", "title": "" }, { "docid": "93133be6094bba6e939cef14a72fa610", "text": "We systematically searched available databases. We reviewed 6,143 studies published from 1833 to 2017. Reports in English, French, German, Italian, and Spanish were considered, as were publications in other languages if definitive treatment and recurrence at specific follow-up times were described in an English abstract. We assessed data in the manner of a meta-analysis of RCTs; further we assessed non-RCTs in the manner of a merged data analysis. In the RCT analysis including 11,730 patients, Limberg & Dufourmentel operations were associated with low recurrence of 0.6% (95%CI 0.3–0.9%) 12 months and 1.8% (95%CI 1.1–2.4%) respectively 24 months postoperatively. Analysing 89,583 patients from RCTs and non-RCTs, the Karydakis & Bascom approaches were associated with recurrence of only 0.2% (95%CI 0.1–0.3%) 12 months and 0.6% (95%CI 0.5–0.8%) 24 months postoperatively. Primary midline closure exhibited long-term recurrence up to 67.9% (95%CI 53.3–82.4%) 240 months post-surgery. 
For most procedures, only a few RCTs without long term follow up data exist, but substitute data from numerous non-RCTs are available. Recurrence in PSD is highly dependent on surgical procedure and by follow-up time; both must be considered when drawing conclusions regarding the efficacy of a procedure.", "title": "" }, { "docid": "3688c987419daade77c44912fbc72ecf", "text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.", "title": "" }, { "docid": "566a2b2ff835d10e0660fb89fd6ae618", "text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). 
FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).", "title": "" }, { "docid": "72345bf404d21d0f7aa1e54a5710674c", "text": "Many real-world data sets exhibit skewed class distributions in which almost all cases are allotted to a class and far fewer cases to a smaller, usually more interesting class. A classifier induced from an imbalanced data set has, typically, a low error rate for the majority class and an unacceptable error rate for the minority class. This paper firstly provides a systematic study on the various methodologies that have tried to handle this problem. Finally, it presents an experimental study of these methodologies with a proposed mixture of expert agents and it concludes that such a framework can be a more effective solution to the problem. Our method seems to allow improved identification of difficult small classes in predictive analysis, while keeping the classification ability of the other classes at an acceptable level.", "title": "" }, { "docid": "b23d73e29fc205df97f073eb571a2b47", "text": "In this paper, we study two different trajectory planning problems for robot manipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. 
Given the dynamic model of the robot manipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and the necessity of patching together solutions between corners. In this way, a general method for the solution of constrained optimal control problems is obtained in which holonomic constraints can be easily treated. Numerical results of the application of this method to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5cd726f49dd0cb94fe7d2d724da9f215", "text": "We implement pedestrian dead reckoning (PDR) for indoor localization. With a waist-mounted PDR based system on a smart-phone, we estimate the user's step length by utilizing the height change of the waist based on the Pythagorean Theorem. 
We propose a zero velocity update (ZUPT) method to address sensor drift error: simple harmonic motion and a low-pass filtering mechanism combined with the analysis of gait characteristics. This method does not require training to develop the step length model. Exploiting the geometric similarity between the user trajectory and the floor map, our map matching algorithm includes three different filters to calibrate the direction errors from the gyro using building floor plans. A sliding-window-based algorithm detects corners. The system achieved 98% accuracy in estimating user walking distance with a waist-mounted phone and 97% accuracy when the phone is in the user's pocket. ZUPT improves sensor drift error (the accuracy drops from 98% to 84% without ZUPT) using 8 Hz as the cut-off frequency to filter out sensor noise. Corner length impacted the corner detection algorithm. In our experiments, the overall location error is about 0.48 meters.", "title": "" }, { "docid": "dc18c0e5737b3d641418e5b33dd3f0e7", "text": "Millimeter wave (mmWave) communications have recently attracted large research interest, since the huge available bandwidth can potentially lead to rates of multiple gigabits per second per user. Though mmWave can be readily used in stationary scenarios, such as indoor hotspots or backhaul, it is challenging to use mmWave in mobile networks, where the transmitting/receiving nodes may be moving, channels may have a complicated structure, and the coordination among multiple nodes is difficult. To fully exploit the high potential rates of mmWave in mobile networks, many technical problems must be addressed. This paper presents a comprehensive survey of mmWave communications for future mobile networks (5G and beyond). We first summarize the recent channel measurement campaigns and modeling results. Then, we discuss in detail recent progress in multiple-input multiple-output transceiver design for mmWave communications. 
After that, we provide an overview of the solutions for multiple access and backhauling, followed by an analysis of coverage and connectivity. Finally, the progress in the standardization and deployment of mmWave for mobile networks is discussed.", "title": "" }, { "docid": "b5b8ae3b7b307810e1fe39630bc96937", "text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called cluster-specific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. 
An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.", "title": "" }, { "docid": "3e7941e6d2e5c2991030950d2a13d48f", "text": "Mobile edge cloud (MEC) is a model for enabling on-demand elastic access to, or an interaction with a shared pool of reconfigurable computing resources such as servers, storage, peer devices, applications, and services, at the edge of the wireless network in close proximity to mobile users. It overcomes some obstacles of traditional central clouds by offering wireless network information and local context awareness as well as low latency and bandwidth conservation. This paper presents a comprehensive survey of MEC systems, including the concept, architectures, and technical enablers. First, the MEC applications are explored and classified based on different criteria, the service models and deployment scenarios are reviewed and categorized, and the factors influencing the MEC system design are discussed. Then, the architectures and designs of MEC systems are surveyed, and the technical issues, existing solutions, and approaches are presented. The open challenges and future research directions of MEC are further discussed.", "title": "" }, { "docid": "8c662416784ddaf8dae387926ba0b17c", "text": "Autoimmune reactions to vaccinations may rarely be induced in predisposed individuals by molecular mimicry or bystander activation mechanisms. 
Autoimmune reactions reliably considered vaccine-associated include Guillain-Barré syndrome after the 1976 swine influenza vaccine, immune thrombocytopenic purpura after the measles/mumps/rubella vaccine, and myopericarditis after smallpox vaccination, whereas the suspected association between the hepatitis B vaccine and multiple sclerosis has not been further confirmed, even though it has recently been reconsidered, and the one between childhood immunization and type 1 diabetes seems by now to have been definitively ruled out. Larger epidemiological studies are needed to obtain more reliable data on most suggested associations.", "title": "" }, { "docid": "9f40a57159a06ecd9d658b4d07a326b5", "text": "The aim of the present study was to investigate the cytotoxic, oxidative cell stress-related actions and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well-known flavonoids. They were able to promote hemolysis, which was exacerbated in the presence of hypochlorous acid but not by the AAPH radical. Therefore,", "title": "" }, { "docid": "4129d2906d3d3d96363ff0812c8be692", "text": "In this paper, we propose a picture recommendation system built on Instagram, which enables users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. 
In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.", "title": "" }, { "docid": "8e28f1561b3a362b2892d7afa8f2164c", "text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. 
Our scheme avoids the pitfall of naive approaches that rely on the weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP), which results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. The existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as to improve inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvements with only a minor negative impact on detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.", "title": "" }, { "docid": "acfdfe2de61ec2697ef865b1e5a42721", "text": "The Artificial Immune System (AIS) algorithm is a novel and vibrant computational paradigm inspired by the biological immune system. Over the last few years, the artificial immune system has been applied to solve numerous computational and combinatorial optimization problems. In this paper, we introduce the restricted MAX-kSAT as a constraint optimization problem that can be solved by a robust computational technique. Hence, we will implement the artificial immune system algorithm incorporated with the Hopfield neural network to solve the restricted MAX-kSAT problem. The proposed paradigm will be compared with the traditional method, a brute-force search algorithm integrated with a Hopfield neural network. 
The results demonstrate that the artificial immune system integrated with the Hopfield network outperforms the conventional Hopfield network in solving restricted MAX-kSAT. All in all, the results provide concrete evidence of the effectiveness of our proposed paradigm, which can be applied to other constraint optimization problems. The work presented here has profound implications for future studies addressing the variety of satisfiability problems.", "title": "" } ]
scidocsrr
04672b593dc0f356a1ef1e33aa86409f
Personalized search result diversification via structured learning
[ { "docid": "27029a5e18e5d874606a87f0d238cd14", "text": "User behavior provides many cues to improve the relevance of search results through personalization. One aspect of user behavior that provides especially strong signals for delivering better relevance is an individual's history of queries and clicked documents. Previous studies have explored how short-term behavior or long-term behavior can be predictive of relevance. Ours is the first study to assess how short-term (session) behavior and long-term (historic) behavior interact, and how each may be used in isolation or in combination to optimally contribute to gains in relevance through search personalization. Our key findings include: historic behavior provides substantial benefits at the start of a search session; short-term session behavior contributes the majority of gains in an extended search session; and the combination of session and historic behavior out-performs using either alone. We also characterize how the relative contribution of each model changes throughout the duration of a session. Our findings have implications for the design of search systems that leverage user behavior to personalize the search experience.", "title": "" } ]
[ { "docid": "894cfbb522a356bba407481bd051d834", "text": "We propose a novel method to handle thin structures in Image-Based Rendering (IBR), and specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures, e.g. railings, fences, oven grills etc, are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multiview consistency of labels. Occlusion of the background can challenge reprojection error calculation and we use multiview median images and variance, with multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth-peeling to allow free-viewpoint navigation of multi-layer semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared to previous image-based rendering solutions.", "title": "" }, { "docid": "4d56abf003caaa11e5bef74a14bd44e0", "text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. 
Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.", "title": "" }, { "docid": "0cc16f8fe35cbf169de8263236d08166", "text": "In this paper, we revisit a generally accepted opinion: implementing Elliptic Curve Cryptosystem (ECC) over GF (2) on sensor motes using small word size is not appropriate because XOR multiplication over GF (2) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF (2) on sensor motes, their performances are not satisfactory enough to be used for wireless sensor networks (WSNs). We have found that a field multiplication over GF (2) are involved in a number of redundant memory accesses and its inefficiency is originated from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running time of field multiplication and reduction over GF (2) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease execution times spent in Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15% ∼ 19%. 
We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve – a kind of TinyOS package supporting elliptic curve operations) which is the fastest ECC implementation over GF (2) on 8-bit sensor motes using ATmega128L as far as we know. Through comparisons with existing software implementations of ECC built in C or hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supporting services. Furthermore, we show that a field multiplication over GF (2) can be faster than that over GF (p) on 8-bit ATmega128L processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF (p). TinyECCK with sect163k1 can compute a scalar multiplication within 1.14 secs on a MICAz mote at the expense of 5,592 bytes of ROM and 618 bytes of RAM. Furthermore, it can also generate a signature and verify it in 1.37 and 2.32 secs with 13,748 bytes of ROM and 1,004 bytes of RAM.", "title": "" }, { "docid": "f12c53ede3ef1cbab2641970aacbe16f", "text": "Considerable advances have been achieved in estimating the depth map from a single image via convolutional neural networks (CNNs) during the past few years. Combining depth prediction from CNNs with conventional monocular simultaneous localization and mapping (SLAM) is promising for accurate and dense monocular reconstruction, in particular addressing the two long-standing challenges in conventional monocular SLAM: low map completeness and scale ambiguity. However, depth estimated by pretrained CNNs usually fails to achieve sufficient accuracy for environments of different types from the training data, which are common for certain applications such as obstacle avoidance of drones in unknown scenes. Additionally, inaccurate depth prediction of CNN could yield large tracking errors in monocular SLAM.
In this paper, we present a real-time dense monocular SLAM system, which effectively fuses direct monocular SLAM with an online-adapted depth prediction network for achieving accurate depth prediction of scenes of different types from the training data and providing absolute scale information for tracking and mapping. Specifically, on one hand, tracking pose (i.e., translation and rotation) from direct SLAM is used for selecting a small set of highly effective and reliable training images, which acts as ground truth for tuning the depth prediction network on-the-fly toward better generalization ability for scenes of different types. A stage-wise Stochastic Gradient Descent algorithm with a selective update strategy is introduced for efficient convergence of the tuning process. On the other hand, the dense map produced by the adapted network is applied to address scale ambiguity of direct monocular SLAM which in turn improves the accuracy of both tracking and overall reconstruction. The system, with the assistance of both CPUs and GPUs, can achieve real-time performance with progressively improved reconstruction accuracy. Experimental results on public datasets and live application to obstacle avoidance of drones demonstrate that our method outperforms the state-of-the-art methods with greater map completeness and accuracy, and a smaller tracking error.", "title": "" }, { "docid": "4f511a669a510153aa233d90da4e406a", "text": "In many visual surveillance applications the task of person detection and localization can be solved more easily by using thermal long-wave infrared (LWIR) cameras which are less affected by changing illumination or background texture than visual-optical cameras. Especially in outdoor scenes where usually only few hot spots appear in thermal infrared imagery, humans can be detected more reliably due to their prominent infrared signature.
We propose a two-stage person recognition approach for LWIR images: (1) the application of Maximally Stable Extremal Regions (MSER) to detect hot spots instead of background subtraction or sliding window and (2) the verification of the detected hot spots using a Discrete Cosine Transform (DCT) based descriptor and a modified Random Naïve Bayes (RNB) classifier. The main contributions are the novel modified RNB classifier and the generality of our method. We achieve high detection rates for several different LWIR datasets with low resolution videos in real-time. While many papers in this topic are dealing with strong constraints such as considering only one dataset, assuming a stationary camera, or detecting only moving persons, we aim at avoiding such constraints to make our approach applicable with moving platforms such as Unmanned Ground Vehicles (UGV).", "title": "" }, { "docid": "bfca88df9d719b1927e94b0beadb32bc", "text": "This paper proposes a new intelligent fashion recommender system to select the most relevant garment design scheme for a specific consumer in order to deliver new personalized garment products. This system integrates emotional fashion themes and human perception on personalized body shapes and professional designers' knowledge. The corresponding perceptual data are systematically collected from professional using sensory evaluation techniques. The perceptual data of consumers and designers are formalized mathematically using fuzzy sets and fuzzy relations. The complex relation between human body measurements and basic sensory descriptors, provided by designers, is modeled using fuzzy decision trees. The fuzzy decision trees constitute an empirical model based on learning data measured and evaluated on a set of representative samples. The complex relation between basic sensory descriptors and fashion themes, given by consumers, is modeled using fuzzy cognitive maps. 
The combination of the two models can provide more complete information to the fashion recommender system, making it possible to evaluate if a specific body shape is relevant to a desired emotional fashion theme and which garment design scheme can improve the image of the body shape. The proposed system has been validated in a customized design and mass market selection through the evaluations of target consumers and fashion experts using a method frequently used in marketing study.", "title": "" }, { "docid": "af22932b48a2ea64ecf3e5ba1482564d", "text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. 
This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.", "title": "" }, { "docid": "0dfcbae479f0af59236a5213cb37983a", "text": "The objective of this work is to detect the use of automated programs, known as game bots, based on social interactions in MMORPGs. Online games, especially MMORPGs, have become extremely popular among internet users in the recent years. Not only the popularity but also security threats such as the use of game bots and identity theft have grown manifold. As bot players can obtain unjustified assets without corresponding efforts, the gaming community does not allow players to use game bots. However, the task of identifying game bots is not an easy one because of the velocity and variety of their evolution in mimicking human behavior. Existing methods for detecting game bots have a few drawbacks like reducing immersion of players, low detection accuracy rate, and collision with other security programs. We propose a novel method for detecting game bots based on the fact that humans and game bots tend to form their social network in contrasting ways. In this work we focus particularly on the in game mentoring network from amongst several social networks. We construct a couple of new features based on eigenvector centrality to capture this intuition and establish their importance for detecting game bots. The results show a significant increase in the classification accuracy of various classifiers with the introduction of these features.", "title": "" }, { "docid": "a8614b86b55411d43d5cc863fcf8ca9c", "text": "This paper introduces a survey of different maximum peak power tracking (MPPT) techniques used in the implementation of photovoltaic power systems. It will discuss different 30 techniques used in tracking maximum power in photovoltaic arrays. 
This paper can be considered a completion and update of the efforts made in [3], which discussed 19 MPPT techniques in PV systems, and summarizes an additional 11 MPPT methods.", "title": "" }, { "docid": "d4345ee2baaa016fc38ba160e741b8ee", "text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.", "title": "" }, { "docid": "33f53ba19c1198fc2342960c57dd22f8", "text": "This paper reports on a facile and low cost method to fabricate highly stretchable potentiometric pH sensor arrays for biomedical and wearable applications. The technique uses laser carbonization of a thermoset polymer followed by transfer and embedment of carbonized nanomaterial onto an elastomeric matrix. The process combines selective laser pyrolization/carbonization with meander interconnect methodology to fabricate stretchable conductive composites with which pH sensors can be realized.
The stretchable pH sensors display a sensitivity of -51 mV/pH over the clinically-relevant range of pH 4-10. The sensors remain stable for strains of up to 50 %.", "title": "" }, { "docid": "7eb4e5b88843d81390c14aae2a90c30b", "text": "A low-power, high-speed, but with a large input dynamic range and output swing class-AB output buffer circuit, which is suitable for the flat-panel display application, is proposed. The circuit employs an elegant comparator to sense the transients of the input to turn on charging/discharging transistors, thus draws little current during static, but has an improved driving capability during transients. It is demonstrated in a 0.6 m CMOS technology.", "title": "" }, { "docid": "a2799e0cee6ca6d7f6b0cc230957b56b", "text": "We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.", "title": "" }, { "docid": "b2246b58bb9fb6c6ff58115e25da49dc", "text": "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. 
We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by Gorelick et al. (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low quality video", "title": "" }, { "docid": "c9acadfba9aa66ef6e7f4bc1d86943f6", "text": "We propose a new saliency detection model by combining global information from frequency domain analysis and local information from spatial domain analysis. In the frequency domain analysis, instead of modeling salient regions, we model the nonsalient regions using global information; these so-called repeating patterns that are not distinctive in the scene are suppressed by using spectrum smoothing. In spatial domain analysis, we enhance those regions that are more informative by using a center-surround mechanism similar to that found in the visual cortex. Finally, the outputs from these two channels are combined to produce the saliency map. We demonstrate that the proposed model has the ability to highlight both small and large salient regions in cluttered scenes and to inhibit repeating objects. 
Experimental results also show that the proposed model outperforms existing algorithms in predicting object regions where humans pay more attention.", "title": "" }, { "docid": "4f57590f8bbf00d35b86aaa1ff476fc0", "text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.", "title": "" }, { "docid": "1c3d933680ed75a1e228f5170dae8847", "text": "Visualization is a critical component of neuroimaging, and how to best view data that is naturally three dimensional is a long standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that with the recent commercialization and popularization of VR, it can offer the next-step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR), is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging.
By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest through a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data.", "title": "" }, { "docid": "abba5d320a4b6bf2a90ba2b836019660", "text": "We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. 
Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.", "title": "" }, { "docid": "ef2cee9972d6d0b84736ff7a0da8995c", "text": "The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.", "title": "" }, { "docid": "cdf0d800c122ff8a64d8fca7386cbfd8", "text": "Digital wireless communication applications such as UWB and WPAN necessitate low-power high-speed ADCs to convert RF/IF signals into digital form for subsequent baseband processing. Considering latency and conversion speed, flash ADCs are often the most preferred option. Generally, flash ADCs suffer from high power consumption and large area overhead. 
On the contrary, SAR ADCs have low power dissipation and occupy a small area. However, a SAR ADC needs several comparison cycles to complete one conversion, which limits its conversion speed. The highest single-channel operation speed of previously reported SAR ADCs is 625MS/s [1]. The ADC in [1] utilizes a 2b/step structure. For non-multi-bit/step SAR ADCs, the highest reported conversion rate is 300MS/s [2]. The structure of a comparator-based binary-search ADC is between that of flash and SAR ADCs [3]. Compared to a flash ADC (high speed, high power) and a SAR ADC (low speed, low power), a binary-search ADC achieves balance between operation speed and power consumption. This paper reports a 5b asynchronous binary-search ADC with reference-range prediction. The maximum conversion speed of this ADC is 800MS/s at a cost of 2mW power consumption.", "title": "" } ]
scidocsrr
a3000a1037f4c47a0ede79d17eb0bdb4
Lay Theories About White Racists: What Constitutes Racism (and What Doesn't)
[ { "docid": "e464cde1434026c17b06716c6a416b7a", "text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.", "title": "" } ]
[ { "docid": "ec90e30c0ae657f25600378721b82427", "text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.", "title": "" }, { "docid": "310036a45a95679a612cc9a60e44e2e0", "text": "A broadband single layer, dual circularly polarized (CP) reflectarrays with linearly polarized feed is introduced in this paper. To reduce the electrical interference between the two orthogonal polarizations of the CP element, a novel subwavelength multiresonance element with a Jerusalem cross and an open loop is proposed, which presents a broader bandwidth and phase range excessing 360° simultaneously. By tuning the x- and y-axis dimensions of the proposed element, an optimization technique is used to minimize the phase errors on both orthogonal components. Then, a single-layer offset-fed 20 × 20-element dual-CP reflectarray has been designed and fabricated. The measured results show that the 1-dB gain and 3-dB axial ratio (AR) bandwidths of the dual-CP reflectarray can reach 12.5% and 50%, respectively, which shows a significant improvement in gain and AR bandwidths as compared to reflectarrays with conventional λ/2 cross-dipole elements.", "title": "" }, { "docid": "d281c9d3862c4e0988247f7fe1e8a702", "text": "The vaginal microbial community is typically characterized by abundant lactobacilli. Lactobacillus iners, a fairly recently detected species, is frequently present in the vaginal niche. 
However, the role of this species in vaginal health is unclear, since it can be detected in normal conditions as well as during vaginal dysbiosis, such as bacterial vaginosis, a condition characterized by an abnormal increase in bacterial diversity and lack of typical lactobacilli. Compared to other Lactobacillus species, L. iners has more complex nutritional requirements and a Gram-variable morphology. L. iners has an unusually small genome (ca. 1 Mbp), indicative of a symbiotic or parasitic lifestyle, in contrast to other lactobacilli that show niche flexibility and genomes of up to 3-4 Mbp. The presence of specific L. iners genes, such as those encoding iron-sulfur proteins and unique σ-factors, reflects a high degree of niche specification. The genome of L. iners strains also encodes inerolysin, a pore-forming toxin related to vaginolysin of Gardnerella vaginalis. Possibly, this organism may have clonal variants that in some cases promote a healthy vagina, and in other cases are associated with dysbiosis and disease. Future research should examine this friend or foe relationship with the host.", "title": "" }, { "docid": "6a6691d92503f98331ad7eed61a9c357", "text": "This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over state-of-the-art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have no yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. 
Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.", "title": "" }, { "docid": "684b9d64f4476a6b9dd3df1bd18bcb1d", "text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation. One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only at room air oxygen (21% oxygen) but well saturated with 100% oxygen, subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.", "title": "" }, { "docid": "8b57c1f4c865c0a414b2e919d19959ce", "text": "A microstrip HPF with sharp attenuation by using cross-coupling is proposed in this paper. The HPF consists of parallel plate- and gap type- capacitors and inductor lines. The one block of the HPF has two sections of a constant K filter in the bridge T configuration. 
Thus the one-block HPF is first coarsely designed and its performance is optimized with a circuit simulator. With the gap capacitor adjusted, the proposed HPF exhibits sharp attenuation characteristics near the cut-off frequency, produced by cross-coupling between the inductor lines. To improve the stopband performance, a cascaded two-block HPF is examined. The measured results show good agreement with the simulated ones, confirming the sharper attenuation slope.", "title": "" }, { "docid": "98e3279056e9bc15ce4b32c6dc027af9", "text": "Publication Information Bazrafkan, Shabab , Javidnia, Hossein , Lemley, Joseph , & Corcoran, Peter (2018). Semiparallel deep neural network hybrid architecture: first application on depth from monocular camera. Journal of Electronic Imaging, 27(4), 19. doi: 10.1117/1.JEI.27.4.043041 Publisher Society of Photo-optical Instrumentation Engineers (SPIE) Link to publisher's version https://dx.doi.org/10.1117/1.JEI.27.4.043041", "title": "" }, { "docid": "64139426292bc1744904a0758b6caed1", "text": "The quantity and complexity of available information is rapidly increasing. This potential information overload challenges the standard information retrieval models, as users find it increasingly difficult to find relevant information. We therefore propose a method that can utilize the potentially valuable knowledge contained in concept models such as ontologies, and thereby assist users in querying, using the terminology of the domain. The primary focus of this dissertation is similarity measures for use in ontology-based information retrieval. We aim at incorporating the information contained in ontologies by choosing a representation formalism where queries and objects in the information base are described using a lattice-algebraic concept language containing expressions that can be directly mapped into the ontology.
Similarity between the description of the query and descriptions of the objects is calculated based on a nearness principle derived from the structure and relations of the ontology. This measure is then used to perform ontology-based query expansion. By doing so, we can replace semantic matching from direct reasoning over the ontology with numerical similarity calculation by means of a general aggregation principle. The choice of the proposed similarity measure is guided by a set of properties aimed at ensuring the measure's accordance with a set of distinctive structural qualities derived from the ontology. We furthermore empirically evaluate the proposed similarity measure by comparing the similarity ratings for pairs of concepts produced by the proposed measure with the mean similarity ratings produced by humans for the same pairs.", "title": "" }, { "docid": "f4d6ff0005ecb467fc8fd3a4a9914ea7", "text": "In this paper, the working principle of the reflective memory network is introduced, the reflective memory network is designed and realized, and the real-time performance, delay determinacy and reliability of the reflective memory network are tested under the QNX real-time operating system. The performance tests indicate that the reflective memory network meets the demands of real-time operation and dependability and greatly improves the stability of the power-supply control system.", "title": "" }, { "docid": "394d30f3bd98cc0a72d940f93f0e32de", "text": "Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. 
By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization, followed by a discussion of major challenges in this area.", "title": "" }, { "docid": "746b9e9e1fdacc76d3acb4f78d824901", "text": "This paper proposes a new method for the detection of glaucoma, a disease which mainly affects the optic disc by increasing the cup size, using fundus images. The ratio of the optic cup to disc (CDR) in retinal fundus images is one of the primary physiological parameters for the diagnosis of glaucoma. The K-means clustering technique is recursively applied to extract the optic disc and optic cup regions, and an elliptical fitting technique is applied to find the CDR values. The blood vessels in the optic disc region are detected using a local entropy thresholding approach. The ratio of the area of blood vessels on the inferior-superior side to the area of blood vessels on the nasal-temporal side (ISNT) is combined with the CDR for the classification of a fundus image as normal or glaucomatous using K-Nearest Neighbor, Support Vector Machine and Bayes classifiers. A batch of 36 retinal images obtained from the Aravind Eye Hospital, Madurai, Tamil Nadu, India is used to assess the performance of the proposed system, and a classification rate of 95% is achieved.", "title": "" }, { "docid": "836815216224b278df229927d825e411", "text": "Logistics demand forecasting is important for investment decision-making on infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both the learning and analyzing phases is proposed to improve the precision and reliability of forecasting. 
After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as the target value and seven other economic indicators, i.e. GDP, production value of the primary industry, total industrial output value, output of the tertiary industry, retail sales of social consumer goods, disposable personal income, and total foreign trade value, as the key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from the years 1986 to 2008 were collected as training and test samples. A comparison of the forecasting results shows that GNNM(1,8) is an appropriate forecasting method, yielding higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.", "title": "" }, { "docid": "5a06eed96bd877138e1f484b2c771c38", "text": "This chapter presents an initial “4+1” theory of value-based software engineering (VBSE). The engine in the center is the stakeholder win-win Theory W, which addresses the questions of “which values are important?” and “how is success assured?” for a given software engineering enterprise. The four additional theories that it draws upon are utility theory (how important are the values?), decision theory (how do stakeholders’ values determine decisions?), dependency theory (how do dependencies affect value realization?), and control theory (how to adapt to change and control value realization?). After discussing the motivation and context for developing a VBSE theory and the criteria for a good theory, the chapter discusses how the theories work together in a process for defining, developing, and evolving software-intensive systems. 
It also illustrates the application of the theory to a supply chain system example, discusses how well the theory meets the criteria for a good theory, and identifies an agenda for further research.", "title": "" }, { "docid": "1cceffd9ef0281f89fb6b7efd5d03371", "text": "We report a compact and wideband 90° hybrid with a one-way tapered 4×4 MMI waveguide. The fabricated device, with a device length of 198 µm, exhibited a phase deviation of <±5.4° over a 70-nm-wide spectral range.", "title": "" }, { "docid": "55eb8b24baa00c38534ef0020c682fff", "text": "NoSQL databases are designed to manage large volumes of data. Although they do not require a default schema associated with the data, they are categorized by data models. Because of this, data organization in NoSQL databases requires significant design decisions, because these decisions affect quality requirements such as scalability, consistency and performance. In traditional database design, in the logical modeling phase, a conceptual schema is transformed into a schema with lower abstraction that is suitable for the target database's data model. In this context, the contribution of this paper is an approach for the logical design of NoSQL document databases. Our approach consists of a process that converts a conceptual model into efficient logical representations for a NoSQL document database. Workload information is considered to determine an optimized logical schema, providing better access performance for the application. 
We evaluate our approach through a case study in the e-commerce domain and demonstrate that the NoSQL logical structure generated by our approach reduces the number of items accessed by the application queries.", "title": "" }, { "docid": "7c99299463d7f2a703f7bd9fbec3df74", "text": "Group emotional contagion, the transfer of moods among people in a group, and its influence on work group dynamics was examined in a laboratory study of managerial decision making using multiple, convergent measures of mood, individual attitudes, behavior, and group-level dynamics. Using a 2 × 2 experimental design, with a trained confederate enacting mood conditions, the predicted effect of emotional contagion was found among group members, using both outside coders' ratings of participants' mood and participants' self-reported mood. No hypothesized differences in contagion effects due to the degree of pleasantness of the mood expressed and the energy level with which it was conveyed were found. There was a significant influence of emotional contagion on individual-level attitudes and group processes. As predicted, the positive emotional contagion group members experienced improved cooperation, decreased conflict, and increased perceived task performance. Theoretical implications and practical ramifications of emotional contagion in groups and organizations are discussed.", "title": "" }, { "docid": "cf8fd0b294f7d8b75df9f54b8e89af29", "text": "This paper reviews 138 empirical quantitative population-based studies of self-reported racism and health. These studies show an association between self-reported racism and ill health for oppressed racial groups after adjustment for a range of confounders. The strongest and most consistent findings are for negative mental health outcomes and health-related behaviours, with weaker associations existing for positive mental health outcomes, self-assessed health status, and physical health outcomes. Most studies in this emerging field have been published in the past 5 years and have been limited by a dearth of cohort studies, a lack of psychometrically validated exposure instruments, poor conceptualization and definition of racism, conflation of racism with stress, and debate about the aetiologically relevant period for self-reported racism. Future research should examine the psychometric validity of racism instruments and include these instruments, along with objectively measured health outcomes, in existing large-scale survey vehicles as well as longitudinal studies and studies involving children. There is also a need to gain a better understanding of the perception, attribution, and reporting of racism, to investigate the pathways via which self-reported racism affects health, the interplay between mental and physical health outcomes, and exposure to intra-racial, internalized, and systemic racism. Ensuring the quality of studies in this field will allow future research to reveal the complex role that racism plays as a determinant of population health.", "title": "" } ]
scidocsrr
b10d06a2a9dd16940546ca0c09ccf85b
MULTI-MODAL BACKGROUND SUBTRACTION USING GAUSSIAN MIXTURE MODELS
[ { "docid": "6851e4355ab4825b0eb27ac76be2329f", "text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described real-time methods fail to properly handle one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making color-based segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.", "title": "" } ]
[ { "docid": "c8768e560af11068890cc097f1255474", "text": "This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has thus far been downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.", "title": "" }, { "docid": "015dbd7c7d1011802046f9b24df24280", "text": "The Resource Description Framework (RDF) provides a common data model for the integration of “real-time” social and sensor data streams with the Web and with each other. While there exist numerous protocols and data formats for exchanging dynamic RDF data, or RDF updates, these options should be examined carefully in order to enable a Semantic Web equivalent of the high-throughput, low-latency streams of typical Web 2.0, multimedia, and gaming applications. This paper contains a brief survey of RDF update formats and a high-level discussion of both TCP and UDP-based transport protocols for updates. Its main contribution is the experimental evaluation of a UDP-based architecture which serves as a real-world example of a high-performance RDF streaming application in an Internet-scale distributed environment.", "title": "" }, { "docid": "d08c24228e43089824357342e0fa0843", "text": "This paper presents a new register assignment heuristic for procedures in SSA form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, cannot preserve the chordal graph property, making it unappealing for SSA-based register allocation. 
OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.", "title": "" }, { "docid": "e5abde9ecd6e50c60306411fc011db2d", "text": "We present a user study of two different automatic strategies that simplify text content for people with dyslexia. The strategies considered are the standard one (replacing a complex word with its simplest synonym) and a new one that presents several synonyms for a complex word if the user requests them. We compare texts transformed by both strategies with the original text and with a manually built gold standard. The study was undertaken by 96 participants: 47 with dyslexia plus a control group of 49 people without dyslexia. To show device independence, for the new strategy we used three different reading devices. Overall, participants with dyslexia found texts presented with the new strategy significantly more readable and comprehensible. To the best of our knowledge, this is the largest user study of its kind.", "title": "" }, { "docid": "9bc182298ad6158dbb5de4da15353312", "text": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or pairs of data. 
We derive a training algorithm for Spectral Inference Networks that addresses the bias in the gradients due to finite batch size and allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets as well as the Arcade Learning Environment. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators, can discover interpretable representations from video, and can find meaningful subgoals in reinforcement learning environments.", "title": "" }, { "docid": "49663600aeff26af65fbfe39f2ed0161", "text": "Misuse cases and attack trees have been suggested for security requirements elicitation and threat modeling in software projects. Their use is believed to increase security awareness throughout the software development life cycle. Experiments have identified strengths and weaknesses of both model types. In this paper, we present how misuse cases and attack trees can be linked to get a high-level view of the threats towards a system through misuse case diagrams and a more detailed view on each threat through attack trees. Further, we introduce links to security activity descriptions in the form of UML activity graphs. These can be used to describe mitigating security activities for each identified threat. The linking of different models makes most sense when security modeling is supported by tools, and we present the concept of a security repository that is being built to store models and relations such as those presented in this paper.", "title": "" }, { "docid": "8b7aab188ac4b6e4e777dfd1c670fab3", "text": "In this paper, we have designed a newly shaped narrowband microstrip antenna operating at nearly 2.45 GHz based on the transmission-line model. 
We have created a reversed 'Arrow'-shaped slot at the edge of the opposite side of the microstrip line feed to improve return loss and minimize VSWR, which are required for better impedance matching. After simulating the design, we obtained a higher return loss (approximately -41 dB) and a lower VSWR (approximately 1.02:1) at 2.442 GHz. The radiation pattern of the antenna is unidirectional, which is suitable for both fixed RFID tags and readers. The gain of this antenna is 9.67 dB. The design has been simulated in CST Microwave Studio 2011.", "title": "" }, { "docid": "7f6e03069810f9d7ef68d6a775b8849b", "text": "For more than a century, the déjà vu experience has been examined through retrospective surveys, prospective surveys, and case studies. About 60% of the population has experienced déjà vu, and its frequency decreases with age. Déjà vu appears to be associated with stress and fatigue, and it shows a positive relationship with socioeconomic level and education. Scientific explanations of déjà vu fall into 4 categories: dual processing (2 cognitive processes momentarily out of synchrony), neurological (seizure, disruption in neuronal transmission), memory (implicit familiarity of unrecognized stimuli), and attentional (unattended perception followed by attended perception). Systematic research is needed on the prevalence and etiology of this culturally familiar cognitive experience, and several laboratory models may help clarify this illusion of recognition.", "title": "" }, { "docid": "eaddba3b27a3a1faf9e957917d102d3f", "text": "Some recent modifications of the protein assay by the method of Lowry, Rosebrough, Farr, and Randall (1951, J. Biol. Chem. 193, 265-275) have been reexamined and altered to provide a consolidated method which is simple, rapid, objective, and more generally applicable. 
A DOC-TCA protein precipitation technique provides for rapid quantitative recovery of soluble and membrane proteins from interfering substances even in very dilute solutions (< 1 µg/ml of protein). SDS is added to alleviate possible nonionic and cationic detergent and lipid interferences, and to provide mild conditions for rapid denaturation of membrane and proteolipid proteins. A simple method based on a linear log-log protein standard curve is presented to permit rapid and totally objective protein analysis using small programmable calculators. The new modification compared favorably with the original method of Lowry et al.", "title": "" }, { "docid": "a0c15895a455c07b477d4486d32582ef", "text": "PURPOSE\nTo evaluate the efficacy of α-lipoic acid (ALA) in reducing scarring after trabeculectomy.\n\n\nMATERIALS AND METHODS\nEighteen adult New Zealand white rabbits underwent trabeculectomy. During trabeculectomy, thin sponges were placed between the sclera and Tenon's capsule for 3 minutes, and saline solution, mitomycin-C (MMC) and ALA were applied to the control group (CG) (n=6 eyes), MMC group (MMCG) (n=6 eyes), and ALA group (ALAG) (n=6 eyes), respectively. After surgery, topical saline and ALA were applied for 28 days to the control and ALA groups, respectively. Filtering bleb patency was evaluated by using 0.1% trypan blue. Hematoxylin and eosin and Masson trichrome staining were used for toxicity, total cellularity, and collagen organization; α-smooth muscle actin immunohistochemistry staining was performed for myofibroblast phenotype identification.\n\n\nRESULTS\nClinical evaluation showed that all 6 blebs (100%) of the CG had failed, whereas there were only 2 failures (33%) in the ALAG and no failures in the MMCG on day 28. Histologic evaluation showed significantly lower inflammatory cell infiltration in the ALAGs and CGs than the MMCG. Toxicity change was more significant in the MMCG than the control and ALAGs. Collagen was better organized in the ALAG than the control and MMCGs. 
In immunohistochemistry evaluation, ALA significantly reduced the population of cells expressing α-smooth muscle actin.\n\n\nCONCLUSIONS\nALA prevents and/or reduces fibrosis by inhibition of inflammation pathways, revascularization, and accumulation of extracellular matrix. It can be used as an agent for delaying tissue regeneration and for providing a more functional, permanent fistula.", "title": "" }, { "docid": "73d9e6a019b45639927752bdc4070876", "text": "An increasingly important challenge in data analytics is dirty data in the form of missing, duplicate, incorrect, or inconsistent values. In the SampleClean project, we have developed a new suite of algorithms to estimate the results of different types of analytic queries after applying data cleaning only to a sample. First, this article describes methods for computing statistically bounded estimates of SUM, COUNT, and AVG queries from samples of data corrupted with duplications and incorrect values. Some types of data error, such as duplication, can affect sampling probabilities, so results have to be re-weighted to compensate for biases. Then it presents an application of these query processing and data cleaning methods to materialized view maintenance. The view cleaning algorithm applies hashing to efficiently maintain a uniform sample of rows in a materialized view, and then dirty data query processing techniques to correct stale query results. Finally, the article describes a gradient-descent algorithm that extends this idea to the increasingly common Machine Learning-based analytics.", "title": "" }, { "docid": "02be83035d624040dcc0b0824092124d", "text": "A generalized power tracking algorithm that minimizes the power consumption of digital circuits by dynamic control of the supply voltage and the body bias is proposed. A direct power monitoring scheme is proposed that does not need any replica and hence can sense the total power consumed by the load circuit across process, voltage, and temperature corners. 
Design details and performance of the power monitor and tracking algorithm are examined with a simulation framework developed using a UMC 90-nm CMOS triple-well process. The proposed algorithm with the direct power monitor achieves power savings of 42.2% for an activity of 0.02 and 22.4% for an activity of 0.04. Experimental results from a test chip fabricated in the AMS 350 nm process show power savings of 46.3% and 65% for a load circuit operating in the super-threshold and near-sub-threshold regions, respectively. The measured resolution of the power monitor is around 0.25 mV, and it has a power overhead of 2.2% of die power. Issues with loop convergence and design tradeoffs for the power monitor are also discussed in this paper.", "title": "" }, { "docid": "a64600e570e7465124fe763c4658ddb5", "text": "There are several applications in VLSI technology that require high-speed shortest-path computations. The shortest path is a path between two nodes (or points) in a graph such that the sum of the weights of its constituent edges is minimum. The Floyd-Warshall algorithm provides the fastest computation of shortest paths between all pairs of nodes present in the graph. With rapid advances in VLSI technology, Field Programmable Gate Arrays (FPGAs) are receiving the attention of the Parallel and High Performance Computing community. This paper gives implementation outcomes of the Floyd-Warshall algorithm to solve the all-pairs shortest-paths problem for a directed graph in Verilog.", "title": "" }, { "docid": "3e4a715c040ebb38674c057de6efc680", "text": "Agricultural data have a major role in the planning and success of rural development activities. Agriculturalists, planners, policy makers, government officials, farmers and researchers require relevant information to trigger decision making processes. This paper presents our approach towards extracting named entities from real-world agricultural data from different areas of agriculture using Conditional Random Fields (CRFs). 
Specifically, we have created a Named Entity tagset consisting of 19 fine-grained tags. To the best of our knowledge, there is no specific tag set or annotated corpus available for the agricultural domain. We have performed several experiments using different combinations of features and obtained encouraging results. Most of the issues observed in an error analysis have been addressed by post-processing heuristic rules, which resulted in a significant improvement of our system's accuracy.", "title": "" }, { "docid": "8cb33cec31601b096ff05426e5ffa848", "text": "Efficient actuation control of flapping-wing microrobots requires a low-power frequency reference with good absolute accuracy. To meet this requirement, we designed a fully-integrated 10 MHz relaxation oscillator in a 40 nm CMOS process. By adaptively biasing the continuous-time comparator, we are able to achieve a power consumption of 20 μW, a 68% reduction compared to the conventional fixed-bias design. A built-in self-calibration controller enables fast post-fabrication calibration of the clock frequency. Measurements show a frequency drift of 1.2% as the battery voltage changes from 3 V to 4.1 V.", "title": "" }, { "docid": "f4017556ac0fd4c1309d0aa062777125", "text": "At this juncture, clinical management, education for medical providers, and the design and interpretation of clinical trials have been hampered by the absence of a consensus system of nomenclature for the description of symptoms, as well as for the classification of causes or potential causes, of abnormal uterine bleeding (AUB). 
To address this issue, the Fédération Internationale de Gynécologie et d'Obstétrique (FIGO) has designed the PALM-COEIN (Polyp, Adenomyosis, Leiomyoma, Malignancy and Hyperplasia, Coagulopathy, Ovulatory Disorders, Endometrial Disorders, Iatrogenic Causes, and Not Classified) classification system for causes of AUB in the reproductive years.", "title": "" }, { "docid": "89af4054eb70309acab13bdb283bde3b", "text": "How to model the distribution of sequential data, including but not limited to speech and human motion, is an important ongoing research problem. It has been demonstrated that model capacity can be significantly enhanced by introducing stochastic latent variables in the hidden states of recurrent neural networks. Simultaneously, WaveNet, equipped with dilated convolutions, achieves astonishing empirical performance in the natural speech generation task. In this paper, we combine the ideas of both stochastic latent variables and dilated convolutions, and propose a new architecture to model sequential data, termed Stochastic WaveNet, where stochastic latent variables are injected into the WaveNet structure. We argue that Stochastic WaveNet enjoys powerful distribution modeling capacity and the advantage of parallel training from dilated convolutions. In order to efficiently infer the posterior distribution of the latent variables, a novel inference network structure is designed based on the characteristics of the WaveNet architecture. State-of-the-art performance on benchmark datasets is obtained by Stochastic WaveNet on natural speech modeling, and high-quality human handwriting samples can be generated as well.", "title": "" }, { "docid": "59119faf4281b933999c62f4d5099495", "text": "In conventional wireless networks, security issues are primarily considered above the physical layer and are usually based on bit-level algorithms to establish the identity of a legitimate wireless device. 
Physical layer security is a new paradigm in which features extracted from an analog signal can be used to establish the unique identity of a transmitter. Our previous research work into RF fingerprinting has shown that every transmitter has a unique RF fingerprint owing to imperfections in the analog components present in the RF front end. Generally, it is believed that the RF fingerprint of a specific transmitter is the same across all receivers. That is, a fingerprint created in one receiver can be transported to another receiver to establish the identity of a transmitter. However, to the best of the author's knowledge, no such example is available in the literature in which an RF fingerprint generated in one receiver is used for identification in other receivers. This paper presents the results of experiments analyzing the feasibility of using a universal RF fingerprint of a transmitter for identification across different receivers.", "title": "" }, { "docid": "0e02a468a65909b93d3876f30a247ab1", "text": "Implant therapy can lead to peri-implantitis, and none of the methods used to treat this inflammatory response have been predictably effective. It is nearly impossible to treat infected surfaces such as TiUnite (a titanium oxide layer) that promote osteoinduction, but finding an effective way to do so is essential. Experiments were conducted to determine the optimum irradiation power for stripping away the contaminated titanium oxide layer with Er:YAG laser irradiation, the degree of implant heating as a result of Er:YAG laser irradiation, and whether osseointegration was possible after Er:YAG laser microexplosions were used to strip a layer from the surface of implants placed in beagle dogs. 
The Er:YAG laser was effective at removing an even layer of titanium oxide, and the use of water spray limited heating of the irradiated implant, thus protecting the surrounding bone tissue from heat damage.", "title": "" }, { "docid": "86910fd866dd4945d044bd6057fe2010", "text": "Context: The literature is rich in examples of both successful and failed global software development projects. However, practitioners do not have the time to wade through the many recommendations to work out which ones apply to them. To this end, we developed a prototype Decision Support System (DSS) for Global Teaming (GT), with the goal of making research results available to practitioners. Aims: We want the system we build to be based on the real needs of practitioners: the end users of our system. Therefore the aim of this study is to assess the usefulness and usability of our proof-of-concept in order to create a tool that is actually used by practitioners. Method: Twelve experts in GSD evaluated our system. Each individual participant tested the system and completed a short usability questionnaire. Results: Feedback on the prototype DSS was positive. All experts supported the concept, although many suggested areas that could be improved. Both expert practitioners and researchers participated, providing different perspectives on what we need to do to improve the system. Conclusion: Involving both practitioners (users) and researchers in the evaluation elicited a range of useful feedback, providing useful insights that might not have emerged had we focused on one or the other group. However, even when we implement recommended changes, we still need to persuade practitioners to adopt the new tool.", "title": "" } ]
scidocsrr
eedac8237e141a6be08a60687507900e
Machine vision: a survey
[ { "docid": "4a5cfc32cccc96c49739cc49f311ddb4", "text": "We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometry-based and image-based modeling and rendering techniques, has two components. The first component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is effective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs.", "title": "" } ]
[ { "docid": "fdcea57edbe935ec9949247fd47888e6", "text": "Maintenance of skeletal muscle mass is contingent upon the dynamic equilibrium (fasted losses-fed gains) in protein turnover. Of all nutrients, the single amino acid leucine (Leu) possesses the most marked anabolic characteristics in acting as a trigger element for the initiation of protein synthesis. While the mechanisms by which Leu is 'sensed' have been the subject of great scrutiny, as a branched-chain amino acid, Leu can be catabolized within muscle, thus posing the possibility that metabolites of Leu could be involved in mediating the anabolic effect(s) of Leu. Our objective was to measure muscle protein anabolism in response to Leu and its metabolite HMB. Using [1,2-(13)C2]Leu and [(2)H5]phenylalanine tracers, and GC-MS/GC-C-IRMS we studied the effect of HMB or Leu alone on MPS (by tracer incorporation into myofibrils), and for HMB we also measured muscle proteolysis (by arteriovenous (A-V) dilution). Orally consumed 3.42 g free-acid (FA-HMB) HMB (providing 2.42 g of pure HMB) exhibited rapid bioavailability in plasma and muscle and, similarly to 3.42 g Leu, stimulated muscle protein synthesis (MPS; HMB +70% vs. Leu +110%). While HMB and Leu both increased anabolic signalling (mechanistic target of rapamycin; mTOR), this was more pronounced with Leu (i.e. p70S6K1 signalling 90 min vs. 30 min for HMB). HMB consumption also attenuated muscle protein breakdown (MPB; -57%) in an insulin-independent manner. We conclude that exogenous HMB induces acute muscle anabolism (increased MPS and reduced MPB) albeit perhaps via distinct, and/or additional mechanism(s) to Leu.", "title": "" }, { "docid": "b6e6784d18c596565ca1e4d881398a0d", "text": "Uncovering lies (or deception) is of critical importance to many including law enforcement and security personnel. 
Though these people may try to use many different tactics to discover deception, previous research tells us that this cannot be accomplished successfully without aid. This manuscript reports on the promising results of a research study where data and text mining methods along with a sample of real-world data from a high-stakes situation are used to detect deception. In the end, the information fusion based classification models produced better than 74% classification accuracy on the holdout sample using a 10-fold cross validation methodology. Individually, artificial neural networks and decision trees produced accuracy rates of 73.46% and 71.60% respectively. However, due to the high stakes associated with these types of decisions, the extra effort of combining the models to achieve higher accuracy is justified.", "title": "" }, { "docid": "877d7d467711e8cb0fd03a941c7dc9da", "text": "Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. 
These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.", "title": "" }, { "docid": "2489fb3b63d40b3f851de5d1b5da4f45", "text": "HANDEXOS is an exoskeleton device for supporting the human hand and performing teleoperation activities. It could be used to operate both in remote-manipulation mode and directly in microgravity environments. In manipulation mode, crew or operators within the spaceship could tele-control the end-effector of a robot in space during the execution of extravehicular activities (EVA) by means of an advanced upper limb exoskeleton. The choice of an appropriate man-machine interface (MMI) is important to allow a correct and dexterous grasp of objects of regular and irregular shapes in space. Many different technologies have been proposed, from conventional joysticks to exoskeletons, but the arising number of more and more dexterous space manipulators such as Robonaut [1] or Eurobot [2] leads researchers to design novel MMIs with the aim of better exploiting all functional advantages offered by new space robots. From this point of view exoskeletons are better suited for the execution of remote-control tasks than conventional joysticks, facilitating commanding of three dimensional trajectories and saving time in crew’s operation and training [3]. Moreover, it’s important to point out that in microgravity environments the astronauts spend most time doing motor exercises, so HANDEXOS can be useful in supporting such motor practice, assisting human operators in overcoming physical limitations deriving from the fatigue in performing EVA. 
It is the goal of this paper to provide a detailed description of HANDEXOS mechanical design and to present the results of the preliminary simulations derived from the implementation of the exoskeleton/human finger dynamic model for different actuation solutions.", "title": "" }, { "docid": "7fd7f6f14e2623695ce3bb99c22db880", "text": "INTRODUCTION 425 DEFINING PLAY 426 THEORIES OF PLAY 428 Piaget 428 Vygotsky 429 VARIETIES OF PLAY AND THEIR DEVELOPMENTAL COURSE 430 Sensorimotor and Object Play 430 Physical or Locomotor Play 430 Rough-and-Tumble Play 431 Exploratory Play 431 Construction Play 432 Symbolic Play 432 Summary 433 CONTEMPORARY ISSUES IN PLAY RESEARCH 433 Pretend Play and Theory of Mind 433 Symbolic Understanding 439 Object Substitution 441 Distinguishing Pretense From Reality 442 Initiating Pretend Play 446 Does Play Improve Developmental Outcomes? 447 INTERINDIVIDUAL DIFFERENCES IN PLAY 451 Gender Differences in Play 451 The Play of Atypically Developing Children 451 Play Across Cultures 454 FUTURE DIRECTIONS 457 Changing Modes of Play 457 Why Children Pretend 458 Play Across the Life Span 459 CONCLUSION 459 REFERENCES 460", "title": "" }, { "docid": "127d6d93290a1953b8baff45e42858cb", "text": "Compressing convolutional neural networks (CNNs) is essential for transferring the success of CNNs to a wide variety of applications on mobile devices. In contrast to directly recognizing subtle weights or filters as redundant in a given CNN, this paper presents an evolutionary method to automatically eliminate redundant convolution filters. We represent each compressed network as a binary individual of specific fitness. Then, the population is upgraded at each evolutionary iteration using genetic operations. As a result, an extremely compact CNN is generated using the fittest individual. In this approach, either large or small convolution filters can be redundant, and filters in the compressed network are more distinct. 
In addition, since the number of filters in each convolutional layer is reduced, the number of filter channels and the size of feature maps are also decreased, naturally improving both the compression and speed-up ratios. Experiments on benchmark deep CNN models suggest the superiority of the proposed algorithm over the state-of-the-art compression methods.", "title": "" }, { "docid": "27b8e6f3781bd4010c92a705ba4d5fcc", "text": "Maximum power point tracking (MPPT) strategies in photovoltaic (PV) systems ensure efficient utilization of PV arrays. Among different strategies, the perturb and observe (P&O) algorithm has gained wide popularity due to its intuitive nature and simple implementation. However, such simplicity in P&O introduces two inherent issues, namely, an artificial perturbation that creates losses in steady-state operation and a limited ability to track transients in changing environmental conditions. This paper develops and discusses in detail an MPPT algorithm with zero oscillation and slope tracking to address those technical challenges. The strategy combines three techniques to improve steady-state behavior and transient operation: 1) idle operation on the maximum power point (MPP); 2) identification of the irradiance change through a natural perturbation; and 3) a simple multilevel adaptive tracking step. Two key elements, which form the foundation of the proposed solution, are investigated: 1) the suppression of the artificial perturb at the MPP; and 2) the indirect identification of irradiance change through a current-monitoring algorithm, which acts as a natural perturbation. The zero-oscillation adaptive step P&O strategy builds on these mechanisms to identify relevant information and to produce efficiency gains. As a result, the combined techniques achieve superior overall performance while maintaining simplicity of implementation. 
Simulations and experimental results are provided to validate the proposed strategy, and to illustrate its behavior in steady and transient operations.", "title": "" }, { "docid": "bb5e00ac09e12f3cdb097c8d6cfde9a9", "text": "3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissue- and organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications.", "title": "" }, { "docid": "5cac184d3eb964a51722321096918ffb", "text": "We propose an effective technique for solving the review-level sentiment classification problem by using sentence-level polarity correction. 
Our polarity correction technique takes into account the consistency of the polarities (positive and negative) of sentences within each product review before performing the actual machine learning task. While sentences with inconsistent polarities are removed, sentences with consistent polarities are used to learn state-of-the-art classifiers. The technique achieved better results on different types of product reviews and outperforms baseline models without the correction technique. Experimental results show an average of 82% F-measure on four different product review domains.", "title": "" }, { "docid": "c3f3ed8a363d8dcf9ac1efebfa116665", "text": "We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., \"Close the drawer\" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentence types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as \"Liz told you the story.\" These data are inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.", "title": "" }, { "docid": "c60c83c93577377bad43ed1972079603", "text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-of-the-art survivability to CW input power levels. 
The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will lead to the next generation of fully-integrated T/R modules", "title": "" }, { "docid": "e083b5fdf76bab5cdc8fcafc77db23f7", "text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.", "title": "" }, { "docid": "c0ae84c759f20ac8eb7f93c28d4f3835", "text": "The first part of this paper summarises the key points about the use of celebrities in advertising, sets this particular creative technique in context and demonstrates how significant its return on investment can be. In the second part the paper goes on to report a more detailed analysis of the ‘celebrity’ case histories among the winners in the IPA Effectiveness Awards, and how practitioners have applied celebrity use to brands to make exceptional impacts on profitability. 
DEFINITIONS ‘Advertising’ Throughout this paper the word ‘advertising’ has the sense that the general public gives it, that is ‘anything that has a name on it is advertising’. This consumer definition results from extensive qualitative research (Ford-Hutchinson and Rothwell, 2002) conducted in 2002 by the Advertising Standards Authority (ASA), the UK advertising self-regulatory body. Its simplicity and directness remind one that, while the industry sees itself promoting brands in a whole host of different ways, it is all ‘advertising’ from the customer’s point of view. Within the ‘marcoms’ industry, practitioners tend to segment these activities into particular niches and refer to the agencies that specialise in them as being in the creative, media, direct marketing, self-promotion, public relations, sponsorship, digital, new media and outdoor sectors, to name just a few. It would be very long-winded to list all these specialisms every time, and so the word ‘advertising’ will be used instead. Occasionally, and for variety, the words ‘marcoms’, ‘marketing communications’ or ‘commercial communications’ are employed instead of ‘advertising’. These terms are used interchangeably to signify all the means by which brands are promoted. Hamish Pringle, Director General, IPA, 44 Belgrave Square, London, SW1X 8QS. Tel: 020 7201 8201; 07977 269778 (m); e-mail: hamish@ipa.co.uk", "title": "" }, { "docid": "27bf341c8c91713b5b9ebed84f78c92b", "text": "The Agile Manifesto and Agile Principles are typically referred to as the definitions of \"agile\" and \"agility\". There is research on agile values and agile practises, but how should “Scaled Agility” be defined, and what might be the characteristics and principles of Scaled Agile? This paper examines the characteristics of scaled agile, and the principles that are used to build up such agility. 
It also suggests principles upon which Scaled Agility can be built.", "title": "" }, { "docid": "e0f89b22f215c140f69a22e6b573df41", "text": "In this paper, a 10-bit 0.5V 100 kS/s successive approximation register (SAR) analog-to-digital converter (ADC) with a new fully dynamic rail-to-rail comparator is presented. The proposed comparator enhances the input signal range to the rail-to-rail mode, and hence, improves the signal-to-noise ratio (SNR) of the ADC in low supply voltages. The effect of the latch offset voltage is reduced by providing a higher voltage gain in the regenerative latch. To reduce the ADC power consumption further, the binary-weighted capacitive array with an attenuation capacitor (BWA) is employed as the digital-to-analog converter (DAC) in this design. The ADC is designed and simulated in a 90 nm CMOS process with a single 0.5V power supply. Spectre simulation results show that the average power consumption of the proposed ADC is about 400 nW and the peak signal-to-noise plus distortion ratio (SNDR) is 56 dB. By considering 10% increase in total ADC power consumption due to the parasitics and a loss of 0.22 LSB in ENOB due to the DAC capacitors mismatch, the achieved figure of merit (FoM) is 11.4 fJ/conversion-step.", "title": "" }, { "docid": "a94f066ec5db089da7fd19ac30fe6ee3", "text": "Information Centric Networking (ICN) is a new networking paradigm in which the network provides users with content instead of communication channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the continuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which grounds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). 
Although some details of our solution have been specifically designed for the CONET architecture, its general ideas and concepts are applicable to a class of recent ICN proposals, which follow the basic mode of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limitations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA European research project. The current OFELIA testbed is based on OpenFlow 1.0 equipment from a variety of vendors, therefore we had to design the experiment taking into account the features that are currently available on off-the-shelf OpenFlow equipment.", "title": "" }, { "docid": "d70235bc7fb94e1e3d1d301f8d1835cb", "text": "How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. 
Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron–electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.", "title": "" }, { "docid": "c5628c76f448fb71165069aefc75a2c4", "text": "This research work aims to design and develop a wireless food ordering system for restaurants. The project presents in depth the technical operation of the Wireless Ordering System (WOS), including systems architecture, function, limitations and recommendations. It is believed that with the increasing use of handheld devices, e.g. PDAs, in restaurants, pervasive applications will become an important management tool for restaurants: utilizing PDAs to coordinate food ordering could increase efficiency for restaurants and caterers by saving time, reducing human errors and providing higher quality customer service. With the combination of simple design and readily available emerging communications technologies, it can be concluded that this system is an attractive solution for the hospitality industry.", "title": "" }, { "docid": "d2f8f98289b59c3ff7c3fd3ec4599945", "text": "Massive public resume data emerging on the internet indicates individual-related characteristics in terms of profile and career experiences. Resume Analysis (RA) provides opportunities for many applications, such as recruitment trend prediction, talent seeking and evaluation. Existing RA studies either largely rely on the knowledge of domain experts, or leverage classic statistical or data mining models to identify and filter explicit attributes based on pre-defined rules. 
However, they fail to discover the latent semantic information from semi-structured resume text, i.e., individual career progress trajectory and social-relations, which are otherwise vital to a comprehensive understanding of people’s career evolving patterns. Besides, when dealing with large numbers of resumes, how to properly visualize such semantic information to reduce the information load and to support better human cognition is also challenging.\n To tackle these issues, we propose a visual analytics system called ResumeVis to mine and visualize resume data. First, a text mining-based approach is presented to extract semantic information. Then, a set of visualizations are devised to represent the semantic information in multiple perspectives. Through interactive exploration on ResumeVis performed by domain experts, the following tasks can be accomplished: to trace individual career evolving trajectory; to mine latent social-relations among individuals; and to hold the full picture of massive resumes’ collective mobility. Case studies with over 2,500 government officer resumes demonstrate the effectiveness of our system.", "title": "" }, { "docid": "8858053a805375aba9d8e71acfd7b826", "text": "With the accelerating rate of globalization, business exchanges are carried out across borders; as a result, there is a growing demand for talent proficient in both English and business. We can see that at present Business English courses are offered by many language schools with the aim of meeting the need for Business English talent. Many researchers argue that no differences can be defined between Business English teaching and General English teaching. However, this paper concludes that Business English is different from General English at least in such aspects as the role of the teacher, course design, and teaching models, thus different teaching methods should be applied in order to realize expected teaching goals.", "title": "" } ]
scidocsrr
d66efb72f65731b2c038286914adc689
Lumped-Element Fully Tunable Bandstop Filters for Cognitive Radio Applications
[ { "docid": "e5e1146fd0704357d865574da45ab2e5", "text": "This paper presents a compact low-loss tunable X-band bandstop filter implemented on a quartz substrate using both miniature RF microelectromechanical systems (RF-MEMS) capacitive switches and GaAs varactors. The two-pole filter is based on capacitively loaded folded-λ/2 resonators that are coupled to a microstrip line, and the filter analysis includes the effects of nonadjacent inter-resonator coupling. The RF-MEMS filter tunes from 11.34 to 8.92 GHz with a −20-dB rejection bandwidth of 1.18%-3.51% and a filter quality factor of 60-135. The GaAs varactor loaded filter tunes from 9.56 to 8.66 GHz with a −20-dB bandwidth of 1.65%-2% and a filter quality factor of 55-90. Nonlinear measurements at the filter null with Δf = 1 MHz show that the RF-MEMS loaded filter results in > 25-dBm higher third-order intermodulation intercept point and P1dB compared with the varactor loaded filter. Both filters show high rejection levels (> 24 dB) and low passband insertion loss (< 0.8 dB) from dc to the first spurious response at 19.5 GHz. The filter topology can be extended to higher order designs with an even number of poles.", "title": "" } ]
[ { "docid": "24d77eb4ea6ecaa44e652216866ab8c8", "text": "In the development of smart cities across the world, VANETs play a vital role in providing optimized routes between source and destination. A VANET is based on an infrastructure-less network. It enables vehicles to share safety information through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication. Because communication between vehicles in VANETs is wireless, attackers can violate authenticity, confidentiality and privacy properties, which further affects security. VANET technology is surrounded by security challenges these days. This paper presents an overview of the VANET architecture and a related survey on VANETs with a major focus on security issues. Further, prevention measures for those issues are presented, and a comparative analysis is done. From the survey, we found that encryption and authentication play an important role in VANETs; some research directions are also defined for future work.", "title": "" }, { "docid": "167dbfaa3b6db3fec5d9f83aacdcbfe8", "text": "Implementing a Natural Language Processing (NLP) system requires considerable engineering effort: creating data-structures to represent language constructs; reading corpora annotations into these data-structures; applying off-the-shelf NLP tools to augment the text representation; extracting features and training machine learning components; conducting experiments and computing performance statistics; and creating the end-user application that integrates the implemented components. While there are several widely used NLP libraries, each provides only partial coverage of these various tasks. 
We present our library COGCOMPNLP which simplifies the process of design and development of NLP applications by providing modules to address different challenges: we provide a corpus-reader module that supports popular corpora in the NLP community, a module for various low-level data-structures and operations (such as search over text), a module for feature extraction, and an extensive suite of annotation modules for a wide range of semantic and syntactic tasks. These annotation modules are all integrated in a single system, PIPELINE, which allows users to easily use the annotators with simple direct calls using any JVM-based language, or over a network. The sister project COGCOMPNLPY enables users to access the annotators with a Python interface. We give a detailed account of our system’s structure and usage, and where possible, compare it with other established NLP frameworks. We report on the performance, including time and memory statistics, of each component on a selection of well-established datasets. Our system is publicly available for research use and external contributions, at: http://github.com/CogComp/cogcomp-nlp.", "title": "" }, { "docid": "12680d4fcf57a8a18d9c2e2b1107bf2d", "text": "Recent advances in computing technology have resulted in an ever-increasing set of documents. The need is to classify the set of documents according to type. Laying related documents together is expedient for decision making. Researchers who perform interdisciplinary research acquire repositories on different topics. Classifying the repositories according to topic is a real need to analyze the research papers. Experiments were performed on different real and artificial datasets such as NEWS 20, Reuters, emails, and research papers on different topics. The Term Frequency-Inverse Document Frequency algorithm is used along with fuzzy K-means and hierarchical algorithms. Initially, the experiment is carried out on a small dataset and cluster analysis is performed. 
The best algorithm is applied to the extended dataset. Along with the different clusters of related documents, the resulting silhouette coefficient, entropy and F-measure trends are presented to show algorithm behavior for each data set.", "title": "" }, { "docid": "75961ecd0eadf854ad9f7d0d76f7e9c8", "text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely through analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44% of bandwidth.", "title": "" }, { "docid": "cf1d8589fb42bd2af21e488e3ea79765", "text": "This paper presents ProRace, a dynamic data race detector practical for production runs. It is lightweight, but still offers high race detection capability. To track memory accesses, ProRace leverages instruction sampling using the performance monitoring unit (PMU) in commodity processors. Our PMU driver enables ProRace to sample more memory accesses at a lower cost compared to the state-of-the-art Linux driver. Moreover, ProRace uses PMU-provided execution contexts including register states and program path, and reconstructs unsampled memory accesses offline. This technique allows ProRace to overcome inherent limitations of sampling and improve the detection coverage by performing data race detection on the trace with not only sampled but also reconstructed memory accesses. 
Experiments using racy production software including apache and mysql show that, with a reasonable offline cost, ProRace incurs only 2.6% overhead at runtime with 27.5% detection probability with a sampling period of 10,000.", "title": "" }, { "docid": "f9ff8dbf8537dffd40ccd938dcb758a8", "text": "In this paper, we propose to cryptanalyse an encryption algorithm which combines a DNA addition and a chaotic map to encrypt a gray scale image. Our contribution consists of, first, demonstrating that the algorithm, as it is described, is non-invertible, which means that the receiver cannot decrypt the ciphered image even if he possesses the secret key. Then, a chosen plaintext attack on the invertible encryption block is described, where the attacker can illegally decrypt the ciphered image by a temporary access to the encryption machinery.", "title": "" }, { "docid": "b95190b1139935bdc40634fe0650a51c", "text": "Much of recent research has been devoted to video prediction and generation, yet most of the previous works have demonstrated only limited success in generating videos on short-term horizons. The hierarchical video prediction method by Villegas et al. (2017b) is an example of a state-of-the-art method for long-term video prediction, but their method is limited because it requires ground truth annotation of high-level structures (e.g., human joint landmarks) at training time. Our network encodes the input frame, predicts a high-level encoding into the future, and then a decoder with access to the first frame produces the predicted image from the predicted encoding. The decoder also produces a mask that outlines the predicted foreground object (e.g., person) as a by-product. Unlike Villegas et al. (2017b), we develop a novel training method that jointly trains the encoder, the predictor, and the decoder together without high-level supervision; we further improve upon this by using an adversarial loss in the feature space to train the predictor.
Our method can predict about 20 seconds into the future and provides better results compared to Denton and Fergus (2018) and Finn et al. (2016) on the Human 3.6M dataset.", "title": "" }, { "docid": "89eee86640807e11fa02d0de4862b3a5", "text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.", "title": "" }, { "docid": "779a8cf77a038dd2d0f852e3bd6e78fe", "text": "Systematic reviews are generally placed above narrative reviews in an assumed hierarchy of secondary research evidence. 
We argue that systematic reviews and narrative reviews serve different purposes and should be viewed as complementary. Conventional systematic reviews address narrowly focused questions; their key contribution is summarising data. Narrative reviews provide interpretation and critique; their key contribution is deepening understanding. This article is protected by copyright. All rights reserved.", "title": "" }, { "docid": "3886d46c2420216f5950cfc22597c82e", "text": "In this article, we describe a new approach to enhance driving safety via multi-media technologies by recognizing and adapting to drivers’ emotions with multi-modal intelligent car interfaces. The primary objective of this research was to build an affectively intelligent and adaptive car interface that could facilitate a natural communication with its user (i.e., the driver). This objective was achieved by recognizing drivers’ affective states (i.e., emotions experienced by the drivers) and by responding to those emotions by adapting to the current situation via an affective user model created for each individual driver. A controlled experiment was designed and conducted in a virtual reality environment to collect physiological data signals (galvanic skin response, heart rate, and temperature) from participants who experienced driving-related emotions and states (neutrality, panic/fear, frustration/anger, and boredom/sleepiness). k-Nearest Neighbor (KNN), Marquardt-Backpropagation (MBP), and Resilient Backpropagation (RBP) Algorithms were implemented to analyze the collected data signals and to find unique physiological patterns of emotions. RBP was the best classifier of these three emotions with 82.6% accuracy, followed by MBP with 73.26% and by KNN with 65.33%. Adaptation of the interface was designed to provide multi-modal feedback to the users about their current affective state and to respond to users’ negative emotional states in order to decrease the possible negative impacts of those emotions. 
Bayesian Belief Networks formalization was employed to develop the user model to enable the intelligent system to appropriately adapt to the current context and situation by considering user-dependent factors, such as personality traits and preferences. 2010 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "bdf191e0f2b06f13da05a08f34901459", "text": "This paper presents a deduplication storage system over cloud computing. Our deduplication storage system consists of two major components, a front-end deduplication application and Hadoop Distributed File System. Hadoop Distributed File System is common back-end distribution file system, which is used with a Hadoop database. We use Hadoop Distributed File System to build up a mass storage system and use a Hadoop database to build up a fast indexing system. With the deduplication applications, a scalable and parallel deduplicated cloud storage system can be effectively built up. We further use VMware to generate a simulated cloud environment. The simulation results demonstrate that our deduplication cloud storage system is more efficient than traditional deduplication approaches.", "title": "" }, { "docid": "194bea0d713d5d167e145e43b3c8b4e2", "text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. 
The output sequence of contexts by FakeMask can be accessed by the untrusted context-aware applications or be used to answer queries from those applications. Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have the knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.", "title": "" }, { "docid": "03e7d909183b66cc3b45eed6ac2de9dd", "text": "A s the millennium draws to a close, it is apparent that one question towers above all others in the life sciences: How does the set of processes we call mind emerge from the activity of the organ we call brain? The question is hardly new. It has been formulated in one way or another for centuries. Once it became possible to pose the question and not be burned at the stake, it has been asked openly and insistently. Recently the question has preoccupied both the experts—neuroscientists, cognitive scientists and philosophers—and others who wonder about the origin of the mind, specifically the conscious mind. The question of consciousness now occupies center stage because biology in general and neuroscience in particular have been so remarkably successful at unraveling a great many of life’s secrets. 
More may have been learned about the brain and the mind in the 1990s—the so-called decade of the brain—than during the entire previous history of psychology and neuroscience. Elucidating the neurobiological basis of the conscious mind—a version of the classic mind-body problem—has become almost a residual challenge. Contemplation of the mind may induce timidity in the contemplator, especially when consciousness becomes the focus of the inquiry. Some thinkers, expert and amateur alike, believe the question may be unanswerable in principle. For others, the relentless and exponential increase in new knowledge may give rise to a vertiginous feeling that no problem can resist the assault of science if only the theory is right and the techniques are powerful enough. The debate is intriguing and even unexpected, as no comparable doubts have been raised over the likelihood of explaining how the brain is responsible for processes such as vision or memory, which are obvious components of the larger process of the conscious mind. The multimedia mind-show occurs constantly as the brain processes external and internal sensory events. As the brain answers the unasked question of who is experiencing the mindshow, the sense of self emerges. by Antonio R. Damasio", "title": "" }, { "docid": "cfa6b417658cfc1b25200a8ff578ed2c", "text": "The Learning Analytics (LA) discipline analyzes educational data obtained from student interaction with online resources. Most of the data is collected from Learning Management Systems deployed at established educational institutions. In addition, other learning platforms, most notably Massive Open Online Courses such as Udacity and Coursera or other educational initiatives such as Khan Academy, generate large amounts of data. However, there is no generally agreedupon data model for student interactions. Thus, analysis tools must be tailored to each system's particular data structure, reducing their interoperability and increasing development costs. 
Some e-Learning standards designed for content interoperability include data models for gathering student performance information. In this paper, we describe how well-known LA tools collect data, which we link to how two e-Learning standards - IEEE Standard for Learning Technology and Experience API - define their data models. From this analysis, we identify the advantages of using these e-Learning standards from the point of view of Learning Analytics.", "title": "" }, { "docid": "d522f9a8b0d2a870a8142e20acff5028", "text": "Node-list and N-list, two novel data structures proposed in recent years, have been proven to be very efficient for mining frequent itemsets. The main problem of these structures is that they both need to encode each node of a PPC-tree with pre-order and post-order code. This makes them memory-consuming and inconvenient for mining frequent itemsets. In this paper, we propose Nodeset, a more efficient data structure, for mining frequent itemsets. Nodesets require only the pre-order (or post-order) code of each node, which saves half of the memory compared with N-lists and Node-lists. Based on Nodesets, we present an efficient algorithm called FIN to mine frequent itemsets. For evaluating the performance of FIN, we have conducted experiments to compare it with PrePost and FP-growth*, two state-of-the-art algorithms, on a variety of real and synthetic datasets. The experimental results show that FIN achieves high performance in both running time and memory usage. Frequent itemset mining, first proposed by Agrawal, Imielinski, and Swami (1993), has become a fundamental task in the field of data mining because it has been widely used in many important data mining tasks such as mining associations, correlations, episodes, and so on.
Since the first proposal of frequent itemset mining, hundreds of algorithms have been proposed on various kinds of extensions and applications, ranging from scalable data mining methodologies, to handling a wide diversity of data types, various extended mining tasks, and a variety of new applications (Han, Cheng, Xin, & Yan, 2007). In recent years, we present two data structures called Node-list (Deng & Wang, 2010) and N-list (Deng, Wang, & Jiang, 2012) for facilitating the mining process of frequent itemsets. Both structures use nodes with pre-order and post-order to represent an itemset. Based on Node-list and N-list, two algorithms called PPV (Deng & Wang, 2010) and PrePost (Deng et al., 2012) are proposed, respectively for mining frequent itemsets. The high efficiency of PPV and PrePost is achieved by the compressed characteristic of Node-lists and N-lists. However, they are memory-consuming because Node-lists and N-lists need to encode a node with pre-order and post-order. In addition, the nodes' code model of Node-list and N-list is not suitable to join Node-lists or N-lists of two short itemsets to generate the Node-list or N-list of a long itemset. This may affect the efficiency of corresponding algorithms. Therefore, how to design an efficient data structure without …", "title": "" }, { "docid": "d0b509f5776f7cdf3c4a108e0dfafd47", "text": "Motivated by the recent success in applying deep learning for natural image analysis, we designed an image segmentation system based on deep Convolutional Neural Network (CNN) to detect the presence of soft tissue sarcoma from multi-modality medical images, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET). Multi-modality imaging analysis using deep learning has been increasingly applied in the field of biomedical imaging and brought unique value to medical applications. 
However, it is still challenging to perform the multi-modal analysis owing to a major difficulty: how to fuse the information derived from different modalities. There exists a variety of possible schemes which are application-dependent and lack a unified framework to guide their designs. Aiming at lesion segmentation with multi-modality images, we innovatively propose a conceptual image fusion architecture for supervised biomedical image analysis. The architecture has been optimized by testing different fusion schemes within the CNN structure, including fusing at the feature learning level, fusing at the classifier level, and fusing at the decision-making level. It is found from the results that while all the fusion schemes outperform the single-modality schemes, fusing at the feature level can generally achieve the best performance in terms of both accuracy and computational cost, but can also suffer from decreased robustness due to the presence of large errors in one or more image modalities.", "title": "" }, { "docid": "6c221c4085c6868640c236b4dd72f777", "text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process.
In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.", "title": "" }, { "docid": "c92593172fafc266a67a049bd95082dc", "text": "The goals of the present study were to apply a generalized regression model and support vector machine (SVM) models with Shape Signatures descriptors, to the domain of blood–brain barrier (BBB) modeling. 
The Shape Signatures method is a novel computational tool that was used to generate molecular descriptors utilized with the SVM classification technique with various BBB datasets. For comparison purposes we have created a generalized linear regression model with eight MOE descriptors and these same descriptors were also used to create SVM models. The generalized regression model was tested on 100 molecules not in the model and resulted in a correlation r 2 = 0.65. SVM models with MOE descriptors were superior to regression models, while Shape Signatures SVM models were comparable or better than those with MOE descriptors. The best 2D shape signature models had 10-fold cross validation prediction accuracy between 80–83% and leave-20%-out testing prediction accuracy between 80–82% as well as correctly predicting 84% of BBB+ compounds (n = 95) in an external database of drugs. Our data indicate that Shape Signatures descriptors can be used with SVM and these models may have utility for predicting blood–brain barrier permeation in drug discovery.", "title": "" }, { "docid": "c5d74c69c443360d395a8371055ef3e2", "text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. 
Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.", "title": "" }, { "docid": "abc1a53ea5e3d3fc7a4b45cbb64c6bca", "text": "This paper proposes a method to measure the junction temperatures of insulated-gate bipolar transistors (IGBTs) during the converter operation for prototype evaluation. The IGBT short-circuit current is employed as the temperature-sensitive electrical parameter (TSEP). The calibration experiments show that the short-circuit current has an adequate temperature sensitivity of 0.35 A/°C. The parameter also has good selectivity and linearity, which makes it suitable to be used as a TSEP. Test circuit and hardware design are proposed for the IGBT junction temperature measurement in various power electronics dc-dc and ac-dc converter applications. By connecting a temperature measurement unit to the converter and giving a short-circuit pulse during the converter operation, the short-circuit current is measured, and the IGBT junction temperature can be derived from the calibration curve. The proposed temperature measurement method is a valuable tool for prototype evaluation and avoids the unnecessary safety margin regarding device operating temperatures, which is significant particularly for high-temperature/high-density converter applications.", "title": "" } ]
scidocsrr
1e68530f79ccd54495b8f842ea675cd3
Feasibility study of mobile phone WiFi detection in aerial search and rescue operations
[ { "docid": "e5a9886927ce33ddd8a0c9a1273c297f", "text": "Recent advances in the field of Unmanned Aerial Vehicles (UAVs) make flying robots suitable platforms for carrying sensors and computer systems capable of performing advanced tasks. This paper presents a technique which allows detecting humans at a high frame rate on standard hardware onboard an autonomous UAV in a real-world outdoor environment using thermal and color imagery. Detected human positions are geolocated and a map of points of interest is built. Such a saliency map can, for example, be used to plan medical supply delivery during a disaster relief effort. The technique has been implemented and tested on-board the UAVTech1 autonomous unmanned helicopter platform as a part of a complete autonomous mission. The results of flight- tests are presented and performance and limitations of the technique are discussed.", "title": "" } ]
[ { "docid": "5eab71f546a7dc8bae157a0ca4dd7444", "text": "We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing usability of competing complex IT systems in public procurement. The method presented enhances traditional heuristic evaluation to include the use context, comprehensive view of the system, and reveals missing functionality by using user scenarios and demonstrations. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.", "title": "" }, { "docid": "51d15ba34f93e0b589d4039226ad2d19", "text": "Botnet phenomenon in smartphones is evolving with the proliferation in mobile phone technologies after leaving imperative impact on personal computers. It refers to the network of computers, laptops, mobile devices or tablets which is remotely controlled by the cybercriminals to initiate various distributed coordinated attacks including spam emails, ad-click fraud, Bitcoin mining, Distributed Denial of Service (DDoS), disseminating other malwares and much more. Likewise traditional PC based botnet, Mobile botnets have the same operational impact except the target audience is particular to smartphone users. Therefore, it is import to uncover this security issue prior to its widespread adaptation. We propose SMARTbot, a novel dynamic analysis framework augmented with machine learning techniques to automatically detect botnet binaries from malicious corpus. 
SMARTbot is a component-based off-device behavioral analysis framework which can generate a mobile botnet learning model by applying the Artificial Neural Networks' back-propagation method. Moreover, this framework can detect mobile botnet binaries with remarkable accuracy even in the case of obfuscated program code. The results conclude that a classifier model based on simple logistic regression outperforms other machine learning classifiers for botnet apps' detection, i.e., 99.49% accuracy is achieved. Further, from manual inspection of the botnet dataset we have extracted interesting trends in those applications. As an outcome of this research, a mobile botnet dataset is devised which will become the benchmark for future studies.", "title": "" }, { "docid": "f36b101aa059792e21281bff8157568f", "text": "Many research projects oriented on control mechanisms of virtual agents in videogames have emerged in recent years. However, this boost has not been accompanied by the emergence of toolkits supporting development of these projects, slowing down the progress in the field. Here, we present Pogamut 3, an open source platform for rapid development of behaviour for virtual agents embodied in a 3D environment of the Unreal Tournament 2004 videogame. Pogamut 3 is designed to support research as well as educational projects. The paper also briefly touches on extensions of Pogamut 3: the ACT-R integration, the emotional model ALMA integration, support for control of avatars at the level of gestures, and a toolkit for developing educational scenarios concerning orientation in urban areas. These extensions make Pogamut 3 applicable beyond the domain of computer games.", "title": "" }, { "docid": "628c8b906e3db854ea92c021bb274a61", "text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city.
An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate the effectiveness of our approach over state-of-the-art methods.", "title": "" }, { "docid": "025e76755193277b2ea55d06d4f22d03", "text": "Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy.
Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a “bioink” to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions.", "title": "" }, { "docid": "3510615d09b9cc7cf3be154d50da7e27", "text": "We propose a non-parametric model for pedestrian motion based on Gaussian Process regression, in which trajectory data are modelled by regressing relative motion against current position. We show how the underlying model can be learned in an unsupervised fashion, demonstrating this on two databases collected from static surveillance cameras. We furthermore exemplify the use of model for prediction, comparing the recently proposed GP-Bayesfilters with a Monte Carlo method. We illustrate the benefit of this approach for long term motion prediction where parametric models such as Kalman Filters would perform poorly.", "title": "" }, { "docid": "cb59c880b3848b7518264f305cfea32a", "text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. 
In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.", "title": "" }, { "docid": "49a53a8cb649c93d685e832575acdb28", "text": "We address the vehicle detection and classification problems using Deep Neural Networks (DNNs) approaches. Here we answer questions that are specific to our application, including how to utilize DNNs for vehicle detection, what features are useful for vehicle classification, and how to extend a model trained on a limited size dataset to the cases of extreme lighting conditions. Answering these questions, we propose our approach that outperforms state-of-the-art methods and achieves promising results on images with extreme lighting conditions.", "title": "" }, { "docid": "cb2d8e7b01de6cdb5a303a38cc11e211", "text": "Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals.\n In this paper, we present PowerTOSSIM, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption.
PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation run. PowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations.", "title": "" }, { "docid": "d0dd13964de87acab0f7fe76585d0bbf", "text": "The continual growth of electronic medical record (EMR) databases has paved the way for many data mining applications, including the discovery of novel disease-drug associations and the prediction of patient survival rates. However, these tasks are hindered because EMRs are usually segmented or incomplete. EMR analysis is further limited by the overabundance of medical term synonyms and morphologies, which causes existing techniques to mismatch records containing semantically similar but lexically distinct terms. Current solutions fill in missing values with techniques that tend to introduce noise rather than reduce it. In this paper, we propose to simultaneously infer missing data and solve semantic mismatching in EMRs by first integrating EMR data with molecular interaction networks and domain knowledge to build the HEMnet, a heterogeneous medical information network. We then project this network onto a low-dimensional space, and group entities in the network according to their relative distances. Lastly, we use this entity distance information to enrich the original EMRs.
We evaluate the effectiveness of this method according to its ability to separate patients with dissimilar survival functions. We show that our method can obtain significant (p-value < 0.01) results for each cancer subtype in a lung cancer dataset, while the baselines cannot.", "title": "" }, { "docid": "0af8bbdda9482f24dfdfc41046382e1b", "text": "In this paper, we have examined the effectiveness of the "style matrix", which is used in the works on style transfer and texture synthesis by Gatys et al., as an image feature in the context of image retrieval. A style matrix is represented by the Gram matrix of the feature maps in a deep convolutional neural network. We proposed a style vector, which is generated from a style matrix with PCA dimension reduction. In the experiments, we evaluate image retrieval performance using artistic images downloaded from Wikiarts.org regarding both artistic styles and artists. We have obtained 40.64% and 70.40% average precision for style search and artist search, respectively, both of which outperformed the results by common CNN features. In addition, we found PCA-compression boosted the performance.", "title": "" }, { "docid": "4d297680cd342f46a5a706c4969273b8", "text": "Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology.", "title": "" }, { "docid": "36a694668a10bc0475f447adb1e09757", "text": "Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences.
Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.", "title": "" }, { "docid": "03550fad9c5f21c69253f2bfc389fccc", "text": "The design of a Ka dual-band circular polarizer by inserting a dielectric septum in the middle of the circular waveguide is discussed here. The dielectric septum is located in fixing slots, and by adjusting the dimension of the dual-compensation slots which are built in the orthogonal plane, the phase difference of 90° at the center frequency for the dual band can be achieved. Furthermore, the gradual changing structures at both ends of the dielectric septum are built for impedance matching for both Ex and Ey polarizations. The simple structure of this kind of polarizer can reduce the influence of manufacturing inaccuracy in the Ka-band. The measured phase difference is within 90° ± 4.5° for both bands. In addition, the return losses for both Ex and Ey polarizations are better than -15 dB.", "title": "" }, { "docid": "e0fc6fc1425bb5786847c3769c1ec943", "text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software.
The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.", "title": "" }, { "docid": "03eb1360ba9e3e38f082099ed08469ed", "text": "In this paper some concepts of fuzzy sets are discussed, and a fuzzy model is applied to an agricultural farm for the optimal allocation of different crops by considering maximization of net benefit, production and utilization of labour. Crisp values of the objective functions obtained from selected nondominated solutions are converted into triangular fuzzy numbers, and ranking of those fuzzy numbers is done to make a decision.", "title": "" }, { "docid": "0742dcc602a216e41d3bfe47bffc7d30", "text": "In this paper we study supervised and semi-supervised classification of e-mails. We consider two tasks: filing e-mails into folders and spam e-mail filtering. Firstly, in a supervised learning setting, we investigate the use of random forest for automatic e-mail filing into folders and spam e-mail filtering. We show that random forest is a good choice for these tasks as it runs fast on large and high dimensional databases, is easy to tune and is highly accurate, outperforming popular algorithms such as decision trees, support vector machines and naïve Bayes.
We introduce a new accurate feature selector with linear time complexity. Secondly, we examine the applicability of the semi-supervised co-training paradigm for spam e-mail filtering by employing random forests, support vector machines, decision trees and naïve Bayes as base classifiers. The study shows that a classifier trained on a small set of labelled examples can be successfully boosted using unlabelled examples to an accuracy rate only 5% lower than that of a classifier trained on all labelled examples. We investigate the performance of co-training with one natural feature split and show that in the domain of spam e-mail filtering it can be as competitive as co-training with two natural feature splits.", "title": "" }, { "docid": "b857bb7ceb60057991f45d1f2ce8453e", "text": "We present DisCo, a novel display-camera communication system. DisCo enables displays and cameras to communicate with each other while also displaying and capturing images for human consumption. Messages are transmitted by temporally modulating the display brightness at high frequencies so that they are imperceptible to humans. Messages are received by a rolling shutter camera that converts the temporally modulated incident light into a spatial flicker pattern. In the captured image, the flicker pattern is superimposed on the pattern shown on the display. The flicker and the display pattern are separated by capturing two images with different exposures. The proposed system performs robustly in challenging real-world situations such as occlusion, variable display size, defocus blur, perspective distortion, and camera rotation. Unlike several existing visible light communication methods, DisCo works with off-the-shelf image sensors. It is compatible with a variety of sources (including displays, single LEDs), as well as reflective surfaces illuminated with light sources. We have built hardware prototypes that demonstrate DisCo’s performance in several scenarios.
Because of its robustness, speed, ease of use, and generality, DisCo can be widely deployed in several applications, such as advertising, pairing of displays with cell phones, tagging objects in stores and museums, and indoor navigation.", "title": "" }, { "docid": "680d755a3a6d8fcd926eb441fad5aa57", "text": "DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems.\nIn this paper, we propose a new framework for discovering interactions between genes based on multiple expression measurements. This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a graph-based model of joint multi-variate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes, and for providing clear methodologies for learning from (noisy) observations.\nWe start by showing how Bayesian networks can describe interactions between genes. We then present an efficient algorithm capable of learning such networks and a statistical method to assess our confidence in their features. Finally, we apply this method to the S. cerevisiae cell-cycle measurements of Spellman et al. [35] to uncover biological features", "title": "" } ]
scidocsrr
3675b67fd4e37f788dd02f44e921939e
Overview of the NLPCC-ICCPOL 2016 Shared Task: Chinese Word Similarity Measurement
[ { "docid": "502abb9980735a090a2f2a8b7510af9b", "text": "This paper presents and compares WordNet-based and distributional similarity approaches. The strengths and weaknesses of each approach regarding similarity and relatedness tasks are discussed, and a combination is presented. Each of our methods independently provides the best results in its class on the RG and WordSim353 datasets, and a supervised combination of them yields the best published results on all datasets. Finally, we pioneer cross-lingual similarity, showing that our methods are easily adapted for a cross-lingual task with minor losses.", "title": "" } ]
[ { "docid": "97a7ebf3cffa55f97e28ca42d1239131", "text": "The effect of selecting varying numbers and kinds of features for use in predicting category membership was investigated on the Reuters and MUC-3 text categorization data sets. Good categorization performance was achieved using a statistical classifier and a proportional assignment strategy. The optimal feature set size for word-based indexing was found to be surprisingly low (10 to 15 features) despite the large training sets. The extraction of new text features by syntactic analysis and feature clustering was investigated on the Reuters data set. Syntactic indexing phrases, clusters of these phrases, and clusters of words were all found to provide less effective representations than individual words.", "title": "" }, { "docid": "8f4f687aff724496efcc37ff7f6bbbeb", "text": "Sentiment analysis is a new way of applying machine learning to extract opinion orientation (positive, negative, neutral) from a text segment written about any product, organization, person or any other entity. Sentiment analysis can be used to predict the mood of people, which has an impact on stock prices and can therefore help in the prediction of actual stock movement. In order to exploit the benefits of sentiment analysis in the stock market industry, we have performed sentiment analysis on tweets related to Apple products, extracted from StockTwits (a social networking site) from 2010 to 2017. Along with the tweets, we have also used market index data extracted from Yahoo Finance for the same period. The sentiment score of a tweet is calculated by sentiment analysis of tweets through SVM. As a result, each tweet is categorized as bullish or bearish. Then the sentiment scores and market data are used to build an SVM model to predict the next day's stock movement.
Results show that there is a positive relation between people's opinions and market data, and the proposed work achieves an accuracy of 76.65% in stock prediction.", "title": "" }, { "docid": "21eddfd81b640fc1810723e93f94ae5d", "text": "R. B. Gnanajothi, Topics in graph theory, Ph. D. thesis, Madurai Kamaraj University, India, 1991. E. M. Badr, On the Odd Gracefulness of Cyclic Snakes With Pendant Edges, International journal on applications of graph theory in wireless ad hoc networks and sensor networks (GRAPH-HOC) Vol. 4, No. 4, December 2012. E. M. Badr, M. I. Moussa & K. Kathiresan (2011): Crown graphs and subdivision of ladders are odd graceful, International Journal of Computer Mathematics, 88:17, 3570-3576. A. Rosa, On certain valuation of the vertices of a graph, Theory of Graphs (International Symposium, Rome, July 1966), Gordon and Breach, New York and Dunod Paris (1967) 349-355. A. Solairaju & P. Muruganantham, Even Vertex Gracefulness of Fan Graph,", "title": "" }, { "docid": "c294a7817e456736135357484f9141ed", "text": "Obesity continues to be one of the major public health problems due to its high prevalence and co-morbidities. Common co-morbidities not only include cardiometabolic disorders but also mood and cognitive disorders. Obese subjects often show deficits in memory, learning and executive functions compared to normal weight subjects. Epidemiological studies also indicate that obesity is associated with a higher risk of developing depression and anxiety, and vice versa. These associations between pathologies that presumably have different etiologies suggest shared pathological mechanisms. Gut microbiota is a mediating factor between the environmental pressures (e.g., diet, lifestyle) and host physiology, and its alteration could partly explain the cross-link between those pathologies.
Westernized dietary patterns are known to be a major cause of the obesity epidemic, which also promotes a dysbiotic drift in the gut microbiota; this, in turn, seems to contribute to obesity-related complications. Experimental studies in animal models and, to a lesser extent, in humans suggest that the obesity-associated microbiota may contribute to the endocrine, neurochemical and inflammatory alterations underlying obesity and its comorbidities. These include dysregulation of the HPA-axis with overproduction of glucocorticoids, alterations in levels of neuroactive metabolites (e.g., neurotransmitters, short-chain fatty acids) and activation of a pro-inflammatory milieu that can cause neuro-inflammation. This review updates current knowledge about the role and mode of action of the gut microbiota in the cross-link between energy metabolism, mood and cognitive function.", "title": "" }, { "docid": "e84b6bbb2eaee0edb6ac65d585056448", "text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus on memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters takes on a number of values and interacts in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap.
DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS [2] and GEMS [13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.", "title": "" }, { "docid": "aca5ad6b3bbd9b52058cde1a71777202", "text": "Despite its high incidence and the great development of literature, there is still controversy about the optimal management of Achilles tendon rupture. The several techniques proposed to treat acute ruptures can essentially be classified into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive technique and percutaneous repair with or without augmentation. Although chronic ruptures represent a different chapter, the ideal treatment seems to be surgical too (debridement, local tissue transfer, augmentation and synthetic grafts). In this paper we reviewed the literature on acute injuries. Review Article Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation Alessandro Bistolfi, Jessica Zanovello, Elisa Lioce, Lorenzo Morino, Raul Cerlon, Alessandro Aprato* and Giuseppe Massazza Medical school, University of Turin, Turin, Italy *Address for Correspondence: Alessandro Aprato, Medical School, University of Turin, Viale 25 Aprile 137 int 6 10131 Torino, Italy, Tel: +39 338 6880640; Email: ale_aprato@hotmail.com Submitted: 03 January 2017 Approved: 13 February 2017 Published: 21 February 2017 Copyright: 2017 Bistolfi A, et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. How to cite this article: Bistolfi A, Zanovello J, Lioce E, Morino L, Cerlon R, et al. Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation. J Nov Physiother Rehabil. 2017; 1: 039-053. https://doi.org/10.29328/journal.jnpr.1001006 INTRODUCTION The Achilles is the strongest and the largest tendon in the body and it can normally withstand several times a subject’s body weight. Achilles tendon rupture is frequent and it has been shown to cause significant morbidity and, regardless of treatment, major functional deficits persist 1 year after acute Achilles tendon rupture [1] and only 50-60% of elite athletes return to pre-injury levels following the rupture [2]. Most Achilles tendon ruptures are promptly diagnosed, but at first exam physicians may miss up to 20% of these lesions [3]. The definition of an old, chronic or neglected rupture is variable: the most used timeframe is 4 to 10 weeks [4]. The diagnosis of chronic rupture can be more difficult because the gap palpable in acute ruptures is no longer present and has been replaced by fibrous scar tissue. Typically, chronic ruptures occur 2 to 6 cm above the calcaneal insertion with extensive scar tissue deposition between the retracted tendon stumps [5], and the blood supply to this area is poor. In this lesion the tendon end usually has been retracted, so the management must differ from that of an acute lesion. Despite its high incidence and the great development of literature about this topic, there is still controversy about the optimal management of Achilles tendon rupture [6].
The several techniques proposed to treat acute ruptures can essentially be classified into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive technique and percutaneous repair [7] with or without augmentation. Chronic ruptures represent a different chapter and the ideal treatment seems to be surgical [3]: the techniques frequently used are debridement, local tissue transfer, augmentation and synthetic grafts [8]. Conservative treatment using a short leg resting cast in an equinus position is probably justified for elderly patients who have lower functional requirements or an increased risk of impaired surgical healing, such as individuals with diabetes mellitus or in treatment with immunosuppressive drugs. In the conservative treatment, traditionally the ankle is immobilized in maximal plantar flexion, so as to re-approximate the two stumps, and a cast is worn to enable the tendon tissue to undergo biological repair. Advantages include the avoidance of surgical complications [9-11] and hospitalization, and cost minimization. However, conservative treatment is associated with a high rate of tendon re-rupture (up to 20%) [12]. Operative treatment can ensure tendon approximation and improve healing, and thus leads to a lower re-rupture rate (about 2-5%). However, complications such as wound infections, skin tethering, sural nerve damage and hypertrophic scar have been reported to range up to 34% [13]. The clinically most commonly used suture techniques for the ruptured Achilles tendon are the Bunnell [14,15] and Kessler techniques [16-18]. Minimally invasive surgical techniques (using limited incisions or percutaneous techniques) are considered to reduce the risk of operative complications and appear successful in preventing re-rupture in cohort studies [19,20].
Ma and Griffith originally described the percutaneous repair, which is a closed procedure performed under local anesthesia using various surgical techniques and instruments. The advantages of this technique are a reduced rate of complications such as infections, nerve lesions or re-ruptures [21]. The surgical repair of a rupture of the Achilles tendon with the Achillon™ device and immediate weight-bearing has shown fewer complications and faster rehabilitation [22]. A thoughtful, comprehensive and responsive rehabilitation program is necessary after the operative treatment of acute Achilles lesions. First of all, the purposes of the rehabilitation program are to obtain a reduction of pain and swelling; secondly, to progress toward the gradual recovery of ankle motion and power; lastly, the restoration of coordinated activity and safe return to daily life and athletic activity [23]. An important point to consider is the immediate postoperative management, which includes immobilization of the ankle and limited or prohibited weight-bearing [24].", "title": "" }, { "docid": "5a5b30b63944b92b168de7c17d5cdc5e", "text": "We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21 000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively.
The annotations are pixel-precise and allow using crops of single instances for artificial data augmentation. The dataset covers several challenges highly relevant in the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.", "title": "" }, { "docid": "c9e5a1b9c18718cc20344837e10b08f7", "text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.", "title": "" }, { "docid": "94229bd589a99a6a6b4691e4778b28fc", "text": "Commercially available software components come with built-in functionality that often offers the end-user more than they need.
The fact that the end-user has little or no influence on a component's functionality has promoted non-functional requirements, which are now getting more attention than ever before. In this paper, we identify some of the problems encountered when non-functional requirements for COTS software components need to be defined.", "title": "" }, { "docid": "2da84ca7d7db508a6f9a443f2dbae7c1", "text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.", "title": "" }, { "docid": "35dbef4cc4b8588d451008b8156f326f", "text": "Raman spectroscopy is a powerful tool for studying the biochemical composition of tissues and cells in the human body. We describe the initial results of a feasibility study to design and build a miniature, fiber optic probe incorporated into a standard hypodermic needle.
This probe is intended for use in optical biopsies of solid tissues to provide valuable information on disease type, such as in the lymphatic system, breast, or prostate, or on such tissue types as muscle, fat, or spinal tissue, when identifying a critical injection site. The optical design and fabrication of this probe are described, and example spectra of various ex vivo samples are shown.", "title": "" }, { "docid": "a78caf89bb51dca3a8a95f7736ae1b2b", "text": "The understanding of sentences involves not only the retrieval of the meaning of single words, but the identification of the relation between a verb and its arguments. The way the brain manages to process word meaning and syntactic relations during on-line language comprehension is still a matter of debate. Here we review the different views discussed in the literature and report data from crucial experiments investigating the temporal and neurotopological parameters of different information types encoded in verbs, i.e. word category information, the verb's argument structure information, the verb's selectional restriction and the morphosyntactic information encoded in the verb's inflection. The neurophysiological indices of the processes dealing with these different information types suggest an initial independence of the processing of word category information from other information types as the basis of local phrase structure building, and a later processing stage during which different information types interact. The relative ordering of the subprocesses appears to be universal, whereas the absolute timing of when during later phases interaction takes place varies as a function of when the relevant information becomes available.
Moreover, the neurophysiological indices for non-local dependency relations vary as a function of the morphological richness of the language.", "title": "" }, { "docid": "70374d2cbf730fab13c3e126359b59e8", "text": "We define a new distance measure, the resistor-average distance, between two probability distributions that is closely related to the Kullback-Leibler distance. While the Kullback-Leibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.", "title": "" }, { "docid": "38012834c3e533adad68fb0d8377f7db", "text": "Undersampling the k-space data is widely adopted for acceleration of Magnetic Resonance Imaging (MRI). Current deep learning based approaches for supervised learning of MRI image reconstruction employ real-valued operations and representations by treating complex-valued k-space/spatial-space as real values. In this paper, we propose the complex dense fully convolutional neural network (CDFNet) for learning to de-alias the reconstruction artifacts within undersampled MRI images. We fashioned a densely-connected fully convolutional block tailored for complex-valued inputs by introducing dedicated layers such as complex convolution, batch normalization, non-linearities etc. CDFNet leverages the inherently complex-valued nature of the input k-space and learns richer representations. We demonstrate improved perceptual quality and recovery of anatomical structures through CDFNet in contrast to its real-valued counterparts.", "title": "" }, { "docid": "0fb9b4577da65280e664eee48a76fd3a", "text": "We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display.
The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation at up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery.", "title": "" }, { "docid": "97b212bb8fde4859e368941a4e84ba90", "text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce better memory than massed-study opportunities—turns out to be quite complicated. Many “impostor” effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline the major theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacing must explain.
We then propose a tentative verbal theory based on the SAM/REM model that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.", "title": "" }, { "docid": "9b2cd501685570f1d27394372cce0103", "text": "We present a transceiver chipset consisting of a four channel receiver (Rx) and a single-channel transmitter (Tx) designed in a 200-GHz SiGe BiCMOS technology. Each Rx channel has a conversion gain of 19 dB with a typical single sideband noise figure of 10 dB at 1-MHz offset. The Tx includes two exclusively-enabled voltage-controlled oscillators on the same die to switch between two bands at 76-77 and 77-81 GHz. The phase noise is -97 dBc/Hz at 1-MHz offset. On-wafer, the output power is 2 × 13 dBm. At 3.3-V supply, the Rx chip draws 240 mA, while the Tx draws 530 mA. The power dissipation for the complete chipset is 2.5 W. The two chips are used as vehicles for a 77-GHz package test. The chips are packaged using the redistribution chip package technology. We compare on-wafer measurements with on-board results. The loss at the RF port due to the transition in the package turns out to be less than 1 dB at 77 GHz.
The results demonstrate the excellent potential of the presented millimeter-wave package concept for millimeter-wave applications.", "title": "" }, { "docid": "c1af668bdeeda5871e3bc6a602f022e6", "text": "Within the parallel computing domain, field programmable gate arrays (FPGA) are no longer restricted to their traditional role as substitutes for application-specific integrated circuits, as hardware \"hidden\" from the end user. Several high performance computing vendors offer parallel reconfigurable computers employing user-programmable FPGAs. These exciting new architectures allow end-users to, in effect, create reconfigurable coprocessors targeting the computationally intensive parts of each problem. The increased capability of contemporary FPGAs coupled with the embarrassingly parallel nature of the Jacobi iterative method makes the Jacobi method an ideal candidate for hardware acceleration. This paper introduces a parameterized design for a deeply pipelined, highly parallelized IEEE 64-bit floating-point version of the Jacobi method. A Jacobi circuit is implemented using a Xilinx Virtex-II Pro as the target FPGA device. Implementation statistics and performance estimates are presented.", "title": "" }, { "docid": "2fe5a40499012640b3b4d18b134b3b7e", "text": "Hollywood has often been called the land of hunches and wild guesses. The uncertainty associated with the predictability of product demand makes the movie business a risky endeavor. Therefore, predicting the box-office receipts of a particular motion picture has intrigued many scholars and industry leaders as a difficult and challenging problem. In this study, with a rather large and feature-rich dataset, we explored the use of data mining methods (e.g., artificial neural networks, decision trees and support vector machines along with information fusion based ensembles) to predict the financial performance of a movie at the box-office before its theatrical release.
In our prediction models, we converted the forecasting problem into a classification problem: rather than forecasting the point estimate of box-office receipts, we classified a movie (based on its box-office receipts) into nine categories, ranging from a “flop” to a “blockbuster.” Herein we present our prediction results, where we compared individual models to those of the ensembles.", "title": "" }, { "docid": "ada79ede490e8427f542d85a2ea5266b", "text": "We present QUINT, a live system for question answering over knowledge bases. QUINT automatically learns role-aligned utterance-query templates from user questions paired with their answers. When QUINT answers a question, it visualizes the complete derivation sequence from the natural language utterance to the final answer. The derivation provides an explanation of how the syntactic structure of the question was used to derive the structure of a SPARQL query, and how the phrases in the question were used to instantiate different parts of the query. When an answer seems unsatisfactory, the derivation provides valuable insights towards reformulating the question.", "title": "" } ]
scidocsrr
9bb36937256e01235372572769288507
A Hybrid Model Combining Convolutional Neural Network with XGBoost for Predicting Social Media Popularity
[ { "docid": "28c6fd64958a21c54f931f5eb802c814", "text": "Time information plays a crucial role in social media popularity. Existing research on popularity prediction, though effective, ignores temporal information which is highly related to user-item associations and thus often results in limited success. An essential way is to consider all these factors (user, item, and time), which capture the dynamic nature of photo popularity. In this paper, we present a novel approach to factorize the popularity into user-item context and time-sensitive context for exploring the mechanism of dynamic popularity. The user-item context provides a holistic view of popularity, while the time-sensitive context captures the temporal dynamics of popularity. Accordingly, we develop two kinds of time-sensitive features, including user activeness variability and photo prevalence variability. To predict photo popularity, we propose a novel framework named Multi-scale Temporal Decomposition (MTD), which decomposes the popularity matrix in latent spaces based on contextual associations. Specifically, the proposed MTD models time-sensitive context on different time scales, which is beneficial to automatically learn temporal patterns. Based on the experiments conducted on a real-world dataset with 1.29M photos from Flickr, our proposed MTD can achieve a prediction accuracy of 79.8% and outperform the best three state-of-the-art methods with a relative improvement of 9.6% on average.", "title": "" } ]
[ { "docid": "0b56f9c9ec0ce1db8dcbfd2830b2536b", "text": "In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over population statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of single-channel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.", "title": "" }, { "docid": "f6d3157155868f5fafe2533dfd8768b8", "text": "Over the past few years, the task of conceiving effective attacks on complex networks has arisen as an optimization problem. Attacks are modelled as the process of removing a number k of vertices from the graph that represents the network, and the goal is to maximise or minimise the value of a predefined metric over the graph. In this work, we present an optimization problem that concerns the selection of nodes to be removed to minimise the maximum betweenness centrality value of the residual graph. This metric evaluates the participation of the nodes in the communications through the shortest paths of the network. To address the problem we propose an artificial bee colony algorithm, which is a swarm intelligence approach inspired by the foraging behaviour of honeybees.
In this framework, bees produce new candidate solutions for the problem by exploring the vicinity of previous ones, called food sources. The proposed method exploits useful problem knowledge in this neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions. The performance of the method, with respect to other models from the literature that can be adapted to face this problem, such as sequential centrality-based attacks, module-based attacks, a genetic algorithm, a simulated annealing approach, and a variable neighbourhood search, is empirically shown.", "title": "" }, { "docid": "e5d523d8a1f584421dab2eeb269cd303", "text": "In this paper, we propose a novel appearance-based method for person re-identification that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histogram representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performance against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.", "title": "" }, { "docid": "4776f37d50709362b6173de58f6badd4", "text": "Current object recognition systems aim at recognizing numerous object classes under limited supervision conditions. This paper provides a benchmark for evaluating progress on this fundamental task.
Several methods have recently been proposed to utilize the commonalities between object classes in order to improve generalization accuracy. Such methods can be termed interclass transfer techniques. However, it is currently difficult to assess which of the proposed methods maximally utilizes the shared structure of related classes. In order to facilitate the development, as well as the assessment, of methods for dealing with multiple related classes, a new dataset including images of several hundred mammal classes is provided, together with preliminary results of its use. The images in this dataset are organized into five levels of variability, and their labels include information on the objects’ identity, location and pose. From this dataset, a classification benchmark has been derived, requiring fine distinctions between 72 mammal classes. It is then demonstrated that a recognition method which is highly successful on the Caltech101 attains limited accuracy on the current benchmark (36.5%). Since this method does not utilize the shared structure between classes, the question remains as to whether interclass transfer methods can increase the accuracy to the level of human performance (90%). We suggest that a labeled benchmark of the type provided, containing a large number of related classes, is crucial for the development and evaluation of classification methods which make efficient use of interclass transfer.", "title": "" }, { "docid": "efa566cdd4f5fa3cb12a775126377cb5", "text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers.
Experimental results obtained by employing such measurement methods are presented, and the influence of each test setup on the measured quantities is discussed.", "title": "" }, { "docid": "e144521f4edf21916991590e173b4cf9", "text": "We demonstrated a high-yield and easily reproducible synthesis of a highly active oxygen evolution reaction (OER) catalyst, \"the core-oxidized amorphous cobalt phosphide nanostructures\". The rational formation of such core-oxidized amorphous cobalt phosphide nanostructures was accomplished by homogenization, drying, and annealing of a cobalt(II) acetate and sodium hypophosphite mixture taken in the weight ratio of 1:10 in an open atmosphere. Electrocatalytic studies, carried out on the same mixture and in comparison with commercial catalysts, viz., Co3O4-Sigma, NiO-Sigma, and RuO2-Sigma, have shown that our catalyst is superior to all three commercial catalysts in terms of having very low overpotential (287 mV at 10 mA cm-2), lower Tafel slope (0.070 V dec-1), good stability upon constant potential electrolysis, and accelerated degradation tests along with a significantly higher mass activity of 300 A g-1 at an overpotential of 360 mV. The synergism between the amorphous CoxPy shell and the Co3O4 core is attributed to the observed enhancement in the OER performance of our catalyst. Moreover, a detailed literature survey has revealed that our catalyst is superior to most of the earlier reports.", "title": "" }, { "docid": "3380a9a220e553d9f7358739e3f28264", "text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottom-up schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework.
We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.", "title": "" }, { "docid": "8ec7edd2d963501b714be80cb2ea8535", "text": "The problem of recognizing text in images taken in the wild has gained significant attention from the computer vision community in recent years. The scene text recognition task is more challenging compared to the traditional problem of recognizing text in printed documents. We focus on this problem, and recognize text extracted from natural scene images and the web. Significant attempts have been made to address this problem in the recent past, for example [1, 2]. However, many of these works benefit from the availability of strong context, which naturally limits their applicability. In this work, we present a framework to overcome these restrictions. Our model introduces a higher order prior computed from an English dictionary to recognize a word, which may or may not be a part of the dictionary. We present experimental analysis on standard as well as new benchmark datasets. The main contributions of this work are: (1) We present a framework, which incorporates higher order statistical language models to recognize words in an unconstrained manner, i.e. we overcome the need for restricted word lists. (2) We achieve significant improvement (more than 20%) in word recognition accuracies in a general setting.
(3) We introduce a large word recognition dataset (at least 5 times larger than other public datasets) with character-level annotation and benchmark it.", "title": "" }, { "docid": "c4fcd7db5f5ba480d7b3ecc46bef29f6", "text": "In this paper, we propose an indoor action detection system which can automatically keep the log of users' activities of daily life since each activity generally consists of a number of actions. The hardware setting here adopts top-view depth cameras, which make our system less privacy-sensitive and less annoying to the users, too. We regard the series of images of an action as a set of key-poses in images of the interested user which are arranged in a certain temporal order and use the latent SVM framework to jointly learn the appearance of the key-poses and the temporal locations of the key-poses. In this work, two kinds of features are proposed. The first is the histogram of depth difference value which can encode the shape of the human poses. The second is the location-signified feature which can capture the spatial relations among the person, floor, and other static objects. Moreover, we find that some incorrect detection results of a certain type of action are usually associated with another certain type of action. Therefore, we design an algorithm that tries to automatically discover the action pairs which are the most difficult to differentiate, and suppress the incorrect detection outcomes. To validate our system, experiments have been conducted, and the experimental results have shown the effectiveness and robustness of our proposed method.", "title": "" }, { "docid": "29c91c8d6f7faed5d23126482a2f553b", "text": "In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce.
Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods.", "title": "" }, { "docid": "0c8947cbaa2226a024bf3c93541dcae1", "text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide a fault tolerance of up to 15 and even higher; and (c) their storage efficiency can reach up to 80% and even higher.
All the advantages make GRID codes more suitable for large-scale storage systems.", "title": "" }, { "docid": "4806b28786af042c23897dbf23802789", "text": "With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direction: can a separate network be trained to efficiently attack another fully trained network? We demonstrate that it is possible, and that the generated attacks yield startling insights into the weaknesses of the target network. We call such a network an Adversarial Transformation Network (ATN). ATNs transform any input into an adversarial attack on the target network, while being minimally perturbing to the original inputs and the target network's outputs. Further, we show that ATNs are capable of not only causing the target network to make an error, but can be constructed to explicitly control the type of misclassification made. We demonstrate ATNs on both simple MNIST digit classifiers and state-of-the-art ImageNet classifiers deployed by Google, Inc.: Inception ResNet-v2. With the resurgence of deep neural networks for many real-world classification tasks, there is an increased interest in methods to assess the weaknesses in the trained models. Adversarial examples are small perturbations of the inputs that are carefully crafted to fool the network into producing incorrect outputs. Seminal work by (Szegedy et al. 2013) and (Goodfellow, Shlens, and Szegedy 2014), as well as much recent work, has shown that adversarial examples are abundant, and that there are many ways to discover them.
Given a classifier f(x) : x ∈ X → y ∈ Y and original inputs x ∈ X, the problem of generating untargeted adversarial examples can be expressed as the optimization: argmin_{x*} L(x, x*) s.t. f(x*) ≠ f(x), where L(·) is a distance metric between examples from the input space (e.g., the L2 norm). Similarly, generating a targeted adversarial attack on a classifier can be expressed as argmin_{x*} L(x, x*) s.t. f(x*) = y_t, where y_t ∈ Y is some target label chosen by the attacker. Until now, these optimization problems have been solved using three broad approaches: (1) By directly using optimizers like L-BFGS or Adam (Kingma and Ba 2015), as proposed in (Szegedy et al. 2013) and (Carlini and Wagner 2016). (2) By approximation with single-step gradient-based techniques like fast gradient sign (Goodfellow, Shlens, and Szegedy 2014) or fast least likely class (Kurakin, Goodfellow, and Bengio 2016). (3) By approximation with iterative variants of gradient-based techniques (Kurakin, Goodfellow, and Bengio 2016; Moosavi-Dezfooli et al. 2016; Moosavi-Dezfooli, Fawzi, and Frossard 2016). These approaches use multiple forward and backward passes through the target network to more carefully move an input towards an adversarial classification. Other approaches assume a black-box model with access only to the target model's output (Papernot et al. 2016; Baluja, Covell, and Sukthankar 2015; Tramèr et al. 2016). See (Papernot et al. 2015) for a discussion of threat models. Each of the above approaches solved an optimization problem such that a single set of inputs was perturbed enough to force the target network to make a mistake. We take a fundamentally different approach: given a well-trained target network, can we create a separate, attack-network that, with high probability, minimally transforms all inputs into ones that will be misclassified?
No per-sample optimization problems should be solved. The attack-network should take as input a clean image and output a minimally modified image that will cause a misclassification in the target network. Further, can we do this while imposing strict constraints on the types and amount of perturbations allowed? We introduce a class of networks, called Adversarial Transformation Networks, to efficiently address this task. Adversarial Transformation Networks In this work, we propose Adversarial Transformation Networks (ATNs). An ATN is a neural network that transforms an input into an adversarial example against a target network or set of networks. ATNs may be untargeted or targeted, and trained in a black-box or white-box manner. In this work, we will focus on targeted, white-box ATNs. Formally, an ATN can be defined as a neural network: g_{f,θ}(x) : x ∈ X → x′ (1) where θ is the parameter vector of g, f is the target network which outputs a probability distribution across class labels, and x′ ∼ x, but argmax f(x) ≠ argmax f(x′). The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "title": "" }, { "docid": "8dc50e5d77db50332c06684cac3e5c01", "text": "BACKGROUND\nRhodiola rosea (R. rosea) is grown at high altitudes and northern latitudes. Due to its purported adaptogenic properties, it has been studied for its performance-enhancing capabilities in healthy populations and its therapeutic properties in a number of clinical populations. To systematically review evidence of efficacy and safety of R. rosea for physical and mental fatigue.\n\n\nMETHODS\nSix electronic databases were searched to identify randomized controlled trials (RCTs) and controlled clinical trials (CCTs), evaluating efficacy and safety of R. rosea for physical and mental fatigue.
Two reviewers independently screened the identified literature, extracted data and assessed risk of bias for included studies.\n\n\nRESULTS\nOf 206 articles identified in the search, 11 met inclusion criteria for this review. Ten were described as RCTs and one as a CCT. Two of six trials examining physical fatigue in healthy populations report R. rosea to be effective, as did three of five RCTs evaluating R. rosea for mental fatigue. All of the included studies either exhibit a high risk of bias or have reporting flaws that hinder assessment of their true validity (unclear risk of bias).\n\n\nCONCLUSION\nResearch regarding R. rosea efficacy is contradictory. While some evidence suggests that the herb may be helpful for enhancing physical performance and alleviating mental fatigue, methodological flaws limit accurate assessment of efficacy. A rigorously designed, well-reported RCT that minimizes bias is needed to determine true efficacy of R. rosea for fatigue.", "title": "" }, { "docid": "f8aeaf04486bdbc7254846d95e3cab24", "text": "In this paper, we present a novel wearable RGBD camera based navigation system for the visually impaired. The system is composed of a smartphone user interface, a glass-mounted RGBD camera device, a real-time navigation algorithm, and a haptic feedback system. A smartphone interface provides an effective way to communicate to the system using audio and haptic feedback. In order to extract orientational information of the blind users, the navigation algorithm performs real-time 6-DOF feature based visual odometry using a glass-mounted RGBD camera as an input device. The navigation algorithm also builds a 3D voxel map of the environment and analyzes 3D traversability. A path planner of the navigation algorithm integrates information from the egomotion estimation and mapping and generates a safe and efficient path to a waypoint delivered to the haptic feedback system.
The haptic feedback system, consisting of four micro-vibration motors, is designed to guide the visually impaired user along the computed path and to minimize cognitive loads. The proposed system achieves real-time performance faster than 30 Hz on average on a laptop, and helps the visually impaired extend the range of their activities and improve their mobility performance in a cluttered environment. The experiment results show that navigation in indoor environments with the proposed system avoids collisions successfully and improves mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content, which is 3.2x more likely to be shared anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are discussed.", "title": "" }, { "docid": "dc207fb8426f468dde2cb1d804b33539", "text": "This paper presents a webcam-based spherical coordinate conversion system using OpenCL massive parallel computing for panorama video image stitching. With multi-core architecture and its high-bandwidth data transmission rate of memory accesses, modern programmable GPUs make it possible to process multiple video images in parallel for real-time interaction. To get a panorama view of 360 degrees, we use OpenCL to stitch multiple webcam video images into a panorama image and texture-map it to a spherical object to compose a virtual reality immersive environment.
The experimental results show that when we use an NVIDIA 9600GT to process eight 640×480 images, OpenCL can achieve a ninety-fold speedup.", "title": "" }, { "docid": "43741bb21c47889b7b0d8de372a4dacd", "text": "Indoor localization or zonification in disaster affected settings is a challenging research problem. Existing studies encompass localization and tracking of first-responders or fire fighters using wireless sensor networks. In addition to that, fast evacuation, routing, and planning have also been proposed. However, the problem of locating survivors or victims is yet to be explored to its full potential. State-of-the-art literature often employs infrastructure dependent solutions, for example, WiFi localization using WiFi access points exploiting fingerprinting techniques, Pedestrian Dead Reckoning (PDR) starting from known locations, etc. Owing to the unpredictable and dynamic nature of disaster affected environments, infrastructure dependent solutions are seldom useful. Therefore, in this study, we propose an ad hoc WiFi zonification technique (named AWZone) that is independent of any infrastructural settings. AWZone attempts to perform localization through exploiting commodity smartphones as beaconing devices and successively searching and narrowing down the search space. We perform two testbed experiments. The results reveal that, for a single survivor or victim, AWZone can identify the search space and estimate a location with an approximate 1.5m localization error through eliminating incorrect zones from a set of possible results.", "title": "" }, { "docid": "f2b1f83a02f7fa226bb7e515790d98d9", "text": "Data analytics using machine learning (ML) has become ubiquitous in science, business intelligence, journalism and many other domains.
While a lot of work focuses on reducing the training cost, inference runtime and storage cost of ML models, little work studies how to reduce the cost of data acquisition, which potentially leads to a loss of sellers’ revenue and buyers’ affordability and efficiency. In this paper, we propose a model-based pricing (MBP) framework, which instead of pricing the data, directly prices ML model instances. We first formally describe the desired properties of the MBP framework, with a focus on avoiding arbitrage. Next, we show a concrete realization of the MBP framework via a noise injection approach, which provably satisfies the desired formal properties. Based on the proposed framework, we then provide algorithmic solutions on how the seller can assign prices to models under different market scenarios (such as to maximize revenue). Finally, we conduct extensive experiments, which validate that the MBP framework can provide high revenue to the seller, high affordability to the buyer, and also operate on low runtime cost.", "title": "" }, { "docid": "086886072f3ac6908bd47822ce7398d1", "text": "This paper presents a methodology to accurately record human finger postures during grasping. The main contribution consists of a kinematic model of the human hand reconstructed via magnetic resonance imaging of one subject that (i) is fully parameterized and can be adapted to different subjects, and (ii) is amenable to in-vivo joint angle recordings via optical tracking of markers attached to the skin. The principal novelty here is the introduction of a soft-tissue artifact compensation mechanism that can be optimally calibrated in a systematic way. The high-quality data gathered are employed to study the properties of hand postural synergies in humans, for the sake of ongoing neuroscience investigations. These data are analyzed and some comparisons with similar studies are reported. 
After a meaningful mapping strategy has been devised, these data could be employed to define robotic hand postures suitable to attain effective grasps, or could be used as prior knowledge in lower-dimensional, real-time avatar hand animation.", "title": "" } ]
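As an illustrative aside to the hand-kinematics passage above: postural synergies of the kind it studies are classically extracted by running PCA over a matrix of recorded joint angles. The sketch below is generic illustrative code, not the paper's implementation; the function names, shapes, and the SVD-based formulation are our own assumptions.

```python
import numpy as np

def extract_synergies(joint_angles, k):
    """Extract k postural synergies (principal components) from an
    (n_postures, n_joints) matrix of recorded joint angles."""
    mean = joint_angles.mean(axis=0)
    centered = joint_angles - mean
    # SVD of the centered data; rows of Vt are the synergy directions,
    # ordered by decreasing explained variance.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    synergies = Vt[:k]                 # (k, n_joints)
    weights = centered @ synergies.T   # low-dimensional posture coordinates
    return mean, synergies, weights

def reconstruct(mean, synergies, weights):
    """Approximate the original postures from the k synergy weights."""
    return mean + weights @ synergies
```

In synergy studies of this kind, a small k (often 2 or 3) already reconstructs grasp postures well, which is what makes the lower-dimensional mapping mentioned above plausible.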
scidocsrr
ceddf706f0a849865e0cb52e55f06478
DisLocation: Scalable Descriptor Distinctiveness for Location Recognition
[ { "docid": "beb22339057840dc9a7876a871d242cf", "text": "We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3×10^4 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.", "title": "" }, { "docid": "3982c66e695fdefe36d8d143247add88", "text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality are exhibited in a live demonstration that recognizes CD covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "title": "" } ]
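Both passages above rely on hierarchical quantization of local descriptors into a vocabulary tree built by recursive k-means. The toy sketch below illustrates the idea only; it is our own simplified code, not the authors' systems, which use SIFT descriptors, large branching factors, and inverted files for retrieval.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    """Minimal Lloyd's k-means: returns cluster centers and labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

def build_tree(X, branch, depth, seed=0):
    """Hierarchical k-means 'vocabulary tree': each node stores its
    cluster centers and one child subtree per cluster."""
    if depth == 0 or len(X) < branch:
        return None
    centers, labels = kmeans(X, branch, seed=seed)
    children = [build_tree(X[labels == j], branch, depth - 1, seed + 1 + j)
                for j in range(branch)]
    return {"centers": centers, "children": children}

def quantize(tree, x):
    """Follow the tree from root toward a leaf; the path of cluster
    indices is the 'visual word' assigned to descriptor x."""
    path, node = [], tree
    while node is not None:
        j = int(((node["centers"] - x) ** 2).sum(1).argmin())
        path.append(j)
        node = node["children"][j]
    return tuple(path)
```

The key property noted in the second abstract holds here too: the tree itself defines the quantization, so indexing and quantization are one structure.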
[ { "docid": "4e5cecf5f52f98bc35067a917d2240bc", "text": "Parasitic copepods, in particular sea lice, have considerable impacts upon global freshwater and marine fisheries, with major economic consequences recognized primarily in aquaculture. Sea lice have been a contentious issue with regard to interactions between farmed and wild populations of fish, in particular salmonids, and their potential for detrimental effects at a population level. The following discussion will pertain to aquatic parasitic copepod species for which we have significant information on the host-parasite interaction and host response to infection (Orders Cyclopoida, Poecilostomatoida and Siphonostomatoida). This review evaluates prior research in terms of contributions to understanding parasite stage-specific responses by the host, and in many cases draws upon model organisms like Lepeophtheirus salmonis and Atlantic salmon to convey important concepts in fish responses to parasitic copepod infection. The article discusses TH1 and TH2-like host responses in light of parasite immunomodulation of the host, current methods of immunological stimulation and where current and future work in this field is heading.", "title": "" }, { "docid": "02308a8c61d0d292441a6eed5dfeffd8", "text": "Disposing of plastic wastes to landfill is becoming undesirable due to legislative pressures, rising costs and the poor biodegradability of commonly used polymers. In addition, incineration meets with strong societal opposition. Therefore, recycling, either mechanical or chemical, seems to be the only route to sustainable plastic waste management. Polyolefins, mainly polyethylene (LDPE or HDPE) and polypropylene (PP), are a major type of thermoplastic used throughout the world in a wide variety of applications. In Western Europe alone approximately 22 million tonnes of these polymers are consumed each year, representing 56% of the total thermoplastics. 
In the present investigation the recycling of LDPE, HDPE and PP was examined using two different methods: the dissolution/reprecipitation and pyrolysis. The first belongs to the mechanical recycling techniques while the second to the chemical/feedstock recycling. During the first technique the polymer can be separated and recycled using a solvent/non-solvent system. For this purpose different solvents/non-solvents were examined at different weight percent amounts and temperatures using either model polymers as raw material or commercial waste products (packaging film, bags, pipes and food retail products). At all different experimental conditions and for all samples examined the polymer recovery was always greater than 90%. The quality of the recycled polymer was examined using FTIR and DSC. Furthermore, pyrolysis of LDPE, HDPE and PP was investigated with or without the use of an acid FCC catalyst. Experiments were carried out in a laboratory fixed bed reactor. The gaseous product was analyzed using GC, while the liquid with GC-MS. A small gaseous and a large liquid fraction were obtained from all polymers. Analysis of the derived gases and oils showed that pyrolysis products were hydrocarbons consisting of a series of alkanes and alkenes, with a great potential to be recycled back into the petrochemical industry as a feedstock for the production of new plastics or refined fuels.", "title": "" }, { "docid": "3e62ac4e3476cc2999808f0a43a24507", "text": "We present a detailed description of a new Bioconductor package, phyloseq, for integrated data and analysis of taxonomically-clustered phylogenetic sequencing data in conjunction with related data types. The phyloseq package integrates abundance data, phylogenetic information and covariates so that exploratory transformations, plots, and confirmatory testing and diagnostic plots can be carried out seamlessly. 
The package is built following the S4 object-oriented framework of the R language so that once the data have been input the user can easily transform, plot and analyze the data. We present some examples that highlight the methods and the ease with which we can leverage existing packages.", "title": "" }, { "docid": "0d59a6b5f8b5684b28adbec835735cd6", "text": "We present a deep learning strategy to fuse multiple semantic cues for complex event recognition. In particular, we tackle the recognition task by answering how to jointly analyze human actions (who is doing what), objects (what), and scenes (where). First, each type of semantic features (e.g., human action trajectories) is fed into a corresponding multi-layer feature abstraction pathway, followed by a fusion layer connecting all the different pathways. Second, the correlations of how the semantic cues interacting with each other are learned in an unsupervised cross-modality autoencoder fashion. Finally, by fine-tuning a large-margin objective deployed on this deep architecture, we are able to answer the question on how the semantic cues of who, what, and where compose a complex event. As compared with the traditional feature fusion methods (e.g., various early or late strategies), our method jointly learns the essential higher level features that are most effective for fusion and recognition. We perform extensive experiments on two real-world complex event video benchmarks, MED'11 and CCV, and demonstrate that our method outperforms the best published results by 21% and 11%, respectively, on an event recognition task.", "title": "" }, { "docid": "1ce25ed2d932f2cec5bb558e71c13277", "text": "Context: Deep Learning (DL) is a division of machine learning techniques that based on algorithms for learning multiples level of representations. Big Data Analytics (BDA) is the process of examining large scale of data and variety of data types. 
Objectives: The aims of this study are to identify the existing features of DL approaches for use in BDA and to identify the key features that affect the effectiveness of DL approaches. Method: A Systematic Literature Review (SLR) was carried out and reported based on the preferred reporting items for systematic reviews. 4065 papers were retrieved by manual search in four databases: Google Scholar, Taylor & Francis, Springer Link and Science Direct. 34 primary studies were finally included. Result: Of these studies, 70% were journal articles, 25% were conference papers and 5% were book chapters. Five features of DL were identified and analyzed. The features are (1) hierarchical layers, (2) high-level abstraction, (3) processing high volumes of data, (4) a universal model and (5) not overfitting the training data. Conclusion: This review delivers evidence that DL in BDA is an active research area. The review provides researchers with some guidelines for future research on this topic. It also provides broad information on DL in BDA which could be useful for practitioners.", "title": "" }, { "docid": "4a75ef354da682701e29a4f76091bed3", "text": "Community detection has arisen as one of the most relevant topics in the field of graph mining, principally for its applications in domains such as social or biological network analysis. Different community detection algorithms have been proposed during the last decade, approaching the problem from different perspectives. However, existing algorithms are, in general, based on complex and expensive computations, making them unsuitable for large graphs with millions of vertices and edges such as those usually found in the real world.\n In this paper, we propose a novel disjoint community detection algorithm called Scalable Community Detection (SCD). 
By combining different strategies, SCD partitions the graph by maximizing the Weighted Community Clustering (WCC), a recently proposed community detection metric based on triangle analysis. Using real graphs with ground-truth overlapping communities, we show that SCD outperforms the current state-of-the-art proposals (even those aimed at finding overlapping communities) in terms of quality and performance. SCD provides the speed of the fastest algorithms and the quality in terms of NMI and F1-score of the most accurate state-of-the-art proposals. We show that SCD is able to run up to two orders of magnitude faster than practical existing solutions by exploiting the parallelism of current multi-core processors, enabling us to process graphs of unprecedented size in short execution times.", "title": "" }, { "docid": "8e4c04fcd4ffc09fbd653bc5c9f107b5", "text": "THE LARGE AMOUNT OF DATA collected today is quickly overwhelming researchers’ abilities to interpret the data and discover interesting patterns in it. In response to this problem, researchers have developed techniques and systems for discovering concepts in databases.1–3 Much of the collected data, however, has an explicit or implicit structural component (spatial or temporal), which few discovery systems are designed to handle.4 So, in addition to the need to accelerate data mining of large databases, there is an urgent need to develop scalable tools for discovering concepts in structural databases. One method for discovering knowledge in structural data is the identification of common substructures within the data. Substructure discovery is the process of identifying concepts describing interesting and repetitive substructures within structural data. The discovered substructure concepts allow abstraction from the detailed data structure and provide relevant attributes for interpreting the data. 
The substructure discovery method is the basis of Subdue, which performs data mining on databases represented as graphs. The system performs two key data-mining techniques: unsupervised pattern discovery and supervised concept learning from examples. Our test applications have demonstrated the scalability and effectiveness of these techniques on a variety of structural databases.", "title": "" }, { "docid": "fb1f467ab11bb4c01a9e410bf84ac258", "text": "The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.", "title": "" }, { "docid": "bcda77a0de7423a2a4331ff87ce9e969", "text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. 
One of these changes is the extension of Compaq's call-logging system with a problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Microsoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.", "title": "" }, { "docid": "07bb0aec18894ae389eea9e2756443f8", "text": "Generative Adversarial Networks (GANs) and their extensions have carved open many exciting ways to tackle well-known and challenging medical image analysis problems such as medical image denoising, reconstruction, segmentation, data simulation, detection or classification. Furthermore, their ability to synthesize images at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In this review paper, a broad overview of recent literature on GANs for medical applications is given, the shortcomings and opportunities of the proposed methods are thoroughly discussed and potential future work is elaborated. A total of 63 papers published until the end of July 2018 are reviewed. For quick access, the papers and important details such as the underlying method, datasets and performance are summarized in tables.", "title": "" }, { "docid": "f9d1777be40b879aee2f6e810422d266", "text": "This study intended to examine the effect of ground colour on memory performance. 
Most past research on the colour–memory relationship focuses on the colour of the figure rather than the background. Based on this evidence, this study tries to extend previous work to ground colour and how it affects memory performance as measured by recall rate. 90 undergraduate students will participate in this study. A multiple independent group experimental design will be used. Fifty geometrical shapes will be used in the study phase, with figure dimensions of 4.74cm x 3.39cm and ground dimensions of 19cm x 25cm. Participants will be measured on the number of shapes recalled in the test phase under three experimental conditions: coloured background, non-coloured background, and mixed coloured and non-coloured background slides. It is hypothesized that shapes with a coloured background will be recalled better than shapes with a non-coloured background. An analysis of variance (ANOVA) will be used to analyse recall performance across the three experimental groups using the Statistical Package for the Social Sciences (SPSS 17.0) to examine the cause-and-effect relationship between those variables.", "title": "" }, { "docid": "6566ad2c654274105e94f99ac5e20401", "text": "This paper presents a universal morphological feature schema that represents the finest distinctions in meaning that are expressed by overt, affixal inflectional morphology across languages. This schema is used to universalize data extracted from Wiktionary via a robust multidimensional table parsing algorithm and feature mapping algorithms, yielding 883,965 instantiated paradigms in 352 languages. 
These data are shown to be effective for training morphological analyzers, yielding significant accuracy gains when applied to Durrett and DeNero’s (2013) paradigm learning framework.", "title": "" }, { "docid": "33ab76f714ca23bdfddecfe436fd1ee2", "text": "A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason-schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR. This work was supported in part by NSF grant no. IRI-9634106. An early version of some of this material appears in Pollock (1996), but it has undergone substantial change in the present paper. keywords: defeasible reasoning, nonmonotonic logic, perception, causes, causation, time, temporal projection, frame problem, qualification problem, OSCAR.", "title": "" }, { "docid": "1d724b07c232098e2a5e5af2bb1e7c83", "text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. 
[5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.", "title": "" }, { "docid": "6b221edbde15defb80ecfb03340b012d", "text": "Abstract We use well-established methods of knot theory to study the topological structure of the set of periodic orbits of the Lü attractor. We show that, for a specific set of parameters, the Lü attractor is topologically different from the classical Lorenz attractor, whose dynamics is formed by a double cover of the simple horseshoe. This argues against the ‘similarity’ between the Lü and Lorenz attractors, claimed, for these parameter values, by some authors on the basis of non-topological observations. However, we show that the Lü system belongs to the Lorenz-like family, since by changing the values of the parameters, the behaviour of the system follows the behaviour of all members of this family. An attractor of the Lü kind with higher order symmetry is constructed and some remarks on the Chen attractor are also presented.", "title": "" }, { "docid": "5e8f2e9d799b865bb16bd3a68003db73", "text": "A robust road markings detection algorithm is a fundamental component of intelligent vehicles' autonomous navigation in urban environment. This paper presents an algorithm for detecting road markings including zebra crossings, stop lines and lane markings to provide road information for intelligent vehicles. 
First, to eliminate the impact of the perspective effect, an Inverse Perspective Mapping (IPM) transformation is applied to the images grabbed by the camera; the region of interest (ROI) is extracted from the IPM image by low-level processing. Then, different algorithms are adopted to extract zebra crossings, stop lines and lane markings. Experiments on a large number of street scenes under different conditions demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "147208d94d35950d0cceef69494de84f", "text": "In 3 experiments, we investigated the effect of grammatical gender on object categorization. Participants were asked to judge whether 2 objects, whose names did or did not share grammatical gender, belonged to the same semantic category by pressing a key. Monolingual speakers of English (Experiment 1), Italian (Experiments 1 and 2), and Spanish (Experiments 2 and 3) were tested in their native language. Italian and Spanish participants responded faster to pairs of stimuli sharing the same gender, whereas no difference was observed for English participants. In Experiment 2, the pictures were chosen in such a way that the grammatical gender of the names was opposite in Italian and Spanish. Therefore, the same pair of stimuli gave rise to different patterns depending on the gender congruency of the names in the languages. In Experiment 3, Spanish speakers performed the same task under an articulatory suppression condition, showing no grammatical gender effect. 
The locus where meaning and gender interact can be located at the level of the lexical representation that specifies syntactic information: Nouns sharing the same grammatical gender activate each other, thus facilitating their processing and speeding up responses, either to semantically related pairs or to semantically unrelated pairs.", "title": "" }, { "docid": "ec2257854faa3076b5c25d2c947d1780", "text": "This paper presents a novel approach for road marking detection and classification based on machine learning algorithms. Road marking recognition is an important feature of an intelligent transportation system (ITS). Previous works are mostly developed using image processing and decisions are often made using empirical functions, which makes it difficult to be generalized. Hereby, we propose a general framework for object detection and classification, aimed at video-based intelligent transportation applications. It is a two-step approach. The detection is carried out using binarized normed gradient (BING) method. PCA network (PCANet) is employed for object classification. Both BING and PCANet are among the latest algorithms in the field of machine learning. Practically the proposed method is applied to a road marking dataset with 1,443 road images. We randomly choose 60% images for training and use the remaining 40% images for testing. Upon training, the system can detect 9 classes of road markings with an accuracy better than 96.8%. The proposed approach is readily applicable to other ITS applications.", "title": "" }, { "docid": "1eb415cae9b39655849537cdc007f51f", "text": "Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. 
By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relations to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view.", "title": "" }, { "docid": "9975e61afd0bf521c3ffbf29d0f39533", "text": "Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. 
Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. © 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
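The PassPoints passage above implies a verification step: an ordered sequence of click points must match the enrolled points within a tolerance. The sketch below is illustrative only; a deployed system would compare hashed, discretized regions rather than raw coordinates, and the square tolerance region and default value here are our assumptions.

```python
def within_tolerance(click, target, tol):
    """A click matches if it falls inside a square tolerance region
    centered on the enrolled point (a common simplification)."""
    return abs(click[0] - target[0]) <= tol and abs(click[1] - target[1]) <= tol

def verify(clicks, enrolled, tol=10):
    """The password is valid only if every click matches the enrolled
    point at the same position in the sequence (order matters)."""
    return (len(clicks) == len(enrolled) and
            all(within_tolerance(c, t, tol) for c, t in zip(clicks, enrolled)))
```

The tolerance trades usability against security: a larger region forgives imprecise clicks but shrinks the effective password space.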
scidocsrr
34b1bd04cf2de83f9a2661cc8d77dd31
Clustering Data Streams Based on Shared Density between Micro-Clusters
[ { "docid": "368a3dd36283257c5573a7e1ab94e930", "text": "This paper develops the multidimensional binary search tree (or <italic>k</italic>-d tree, where <italic>k</italic> is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The <italic>k</italic>-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an <italic>n</italic> record file are: insertion, <italic>O</italic>(log <italic>n</italic>); deletion of the root, <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-1)/<italic>k</italic></supscrpt>); deletion of a random node, <italic>O</italic>(log <italic>n</italic>); and optimization (guarantees logarithmic performance of searches), <italic>O</italic>(<italic>n</italic> log <italic>n</italic>). Search algorithms are given for partial match queries with <italic>t</italic> keys specified [proven maximum running time of <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-<italic>t</italic>)/<italic>k</italic></supscrpt>)] and for nearest neighbor queries [empirically observed average running time of <italic>O</italic>(log <italic>n</italic>).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that <italic>k</italic>-d trees could be quite useful in many applications, and examples of potential uses are given.", "title": "" } ]
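The k-d tree passage above describes cyclic axis splitting and nearest-neighbor search; a minimal sketch of both follows. This is generic textbook code for illustration (median splits, branch-and-bound pruning), not Bentley's original algorithms or their proven-complexity variants.

```python
import math

def build_kdtree(points, depth=0):
    """Build a k-d tree by splitting on axes cyclically at the median."""
    if not points:
        return None
    axis = depth % len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid], "axis": axis,
            "left": build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbor search; returns (dist, point)."""
    if node is None:
        return best
    d = math.dist(node["point"], query)
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    # descend into the far subtree only if the splitting plane is closer
    # than the best distance found so far
    if abs(diff) < best[0]:
        best = nearest(far, query, best)
    return best
```

The pruning test on the splitting plane is what gives the empirically logarithmic search time the abstract mentions.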
[ { "docid": "4cebaea2af0ec07d45b27d0c857d301c", "text": "We propose design patterns as a new mechanism for expressing object-oriented design experience. Design patterns identify, name, and abstract common themes in object-oriented design. They capture the intent behind a design by identifying objects, their collaborations, and the distribution of responsibilities. Design patterns play many roles in the object-oriented development process: they provide a common vocabulary for design, they reduce system complexity by naming and defining abstractions, they constitute a base of experience for building reusable software, and they act as building blocks from which more complex designs can be built. Design patterns can be considered reusable micro-architectures that contribute to an overall system architecture. We describe how to express and organize design patterns and introduce a catalog of design patterns. We also describe our experience in applying design patterns to the design of object-oriented systems.", "title": "" }, { "docid": "27ed4433fad92baec6bbbfa003b591b6", "text": "The new generation of high-performance decimal floating-point units (DFUs) is demanding efficient implementations of parallel decimal multipliers. In this paper, we describe the architectures of two parallel decimal multipliers. The parallel generation of partial products is performed using signed-digit radix-10 or radix-5 recodings of the multiplier and a simplified set of multiplicand multiples. The reduction of partial products is implemented in a tree structure based on a decimal multioperand carry-save addition algorithm that uses unconventional (non-BCD) decimal-coded number systems. 
We further detail these techniques and present the new improvements to reduce the latency of the previous designs, which include: optimized digit recoders for the generation of 2n-tuples (and 5-tuples), decimal carry-save adders (CSAs) combining different decimal-coded operands, and carry-free adders implemented by specially designed bit counters. Moreover, we detail a design methodology that combines all these techniques to obtain efficient reduction trees with different area and delay trade-offs for any number of partial products generated. Evaluation results for 16-digit operands show that the proposed architectures have interesting area-delay figures compared to conventional Booth radix-4 and radix-8 parallel binary multipliers and outperform the figures of previous alternatives for decimal multiplication.", "title": "" }, { "docid": "4b2b199aeb61128cbee7691bc49e16f5", "text": "Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition is still a challenging task due to the large volume of data to be processed and intra/inter-video variations on pose, illumination, occlusion, scene, blur, video quality, etc. In this work, we consider challenging scenarios for unconstrained video-based face recognition from multiple-shot videos and surveillance videos with low-quality frames. To handle these problems, we propose a robust and efficient system for unconstrained video-based face recognition, which is composed of face/fiducial detection, face association, and face recognition. First, we use multi-scale single-shot face detectors to efficiently localize faces in videos. The detected faces are then grouped respectively through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized by the proposed face matcher based on an unsupervised subspace learning approach and a subspace-to-subspace similarity metric. 
Extensive experiments on challenging video datasets, such as Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), JANUS Challenge Set 6 (CS6) for low-quality surveillance videos and IARPA JANUS Benchmark B (IJB-B) for multiple-shot videos, demonstrate that the proposed system can accurately detect and associate faces from unconstrained videos and effectively learn robust and discriminative features for recognition.", "title": "" }, { "docid": "523d11b771c5ea8776217eed253e6817", "text": "Incremental learning (IL) is an important task aimed to increase the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while training the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on the edge devices with limited memory. Hence, we propose a novel approach, called ‘Learning without Memorizing (LwM)’, to preserve the information with respect to existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss (LAD), and demonstrate that penalizing the changes in classifiers’ attention maps helps to retain information of the base classes, as new classes are added. 
We show that adding LAD to the distillation loss, an existing information-preserving loss, consistently outperforms the state-of-the-art performance on the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.", "title": "" }, { "docid": "6e7098f39a8b860307dba52dcc7e0d42", "text": "The paper presents an experimental algorithm to detect conventionalized metaphors implicit in the lexical data in a resource like WordNet, where metaphors are coded into the senses and so would never be detected by any algorithm based on the violation of preferences, since there would always be a constraint satisfied by such senses. We report an implementation of this algorithm, which was implemented first using the preference constraints in VerbNet. We then derived in a systematic way a far more extensive set of constraints based on WordNet glosses, and with this data we reimplemented the detection algorithm and got a substantial improvement in recall. We suggest that this technique could contribute to improving the performance of existing metaphor detection strategies that do not attempt to detect conventionalized metaphors. The new WordNet-derived data is of wider significance because it also contains adjective constraints, unlike any existing lexical resource, and can be applied to any language with a semantic parser (and", "title": "" }, { "docid": "e86ee83d4270098d414338f3140c46e6", "text": "In this paper, we study aspects of single microphone speech enhancement (SE) based on deep neural networks (DNNs). Specifically, we explore the generalizability capabilities of state-of-the-art DNN-based SE systems with respect to the background noise type, the gender of the target speaker, and the signal-to-noise ratio (SNR). 
Furthermore, we investigate how specialized DNN-based SE systems, which have been trained to be either noise type specific, speaker specific or SNR specific, perform relative to DNN-based SE systems that have been trained to be noise type general, speaker general, and SNR general. Finally, we compare how a DNN-based SE system trained to be noise type general, speaker general, and SNR general performs relative to a state-of-the-art short-time spectral amplitude minimum mean square error (STSA-MMSE) based SE algorithm. We show that DNN-based SE systems, when trained specifically to handle certain speakers, noise types and SNRs, are capable of achieving large improvements in estimated speech quality (SQ) and speech intelligibility (SI), when tested in matched conditions. Furthermore, we show that improvements in estimated SQ and SI can be achieved by a DNN-based SE system when exposed to unseen speakers, genders and noise types, given a large number of speakers and noise types have been used in the training of the system. In addition, we show that a DNN-based SE system that has been trained using a large number of speakers and a wide range of noise types outperforms a state-of-the-art STSA-MMSE based SE method, when tested using a range of unseen speakers and noise types. Finally, a listening test using several DNN-based SE systems tested in unseen speaker conditions shows that these systems can improve SI for some SNR and noise type configurations but degrade SI for others.", "title": "" }, { "docid": "c9a28a3d90f6d716643c45ed2c0b47bb", "text": "A fast, completely automated method to create 3D watertight building models from airborne LiDAR point clouds is presented. The proposed method analyzes the scene content and produces multi-layer rooftops with complex boundaries and vertical walls that connect rooftops to the ground. A graph cuts based method is used to segment vegetative areas from the rest of scene content. 
The ground terrain and building rooftop patches are then extracted utilizing our technique, the hierarchical Euclidean clustering. Our method adopts a “divide-and-conquer” strategy. Once potential points on rooftops are segmented from terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential building footprints. For each individual building region, significant features on the rooftop are further detected using a specifically designed region growing algorithm with a smoothness constraint. Boundaries for all of these features are refined in order to produce a strict description. After this refinement, mesh models could be generated using an existing robust dual contouring method.", "title": "" }, { "docid": "4d00dc1306e624fda75742295cd3005b", "text": "We present a transparent conducting electrode composed of a periodic two-dimensional network of silver nanowires. Networks of Ag nanowires are made with wire diameters of 45-110 nm and a pitch of 500, 700, and 1000 nm. Anomalous optical transmission is observed, with an averaged transmission up to 91% for the best transmitting network and sheet resistances as low as 6.5 Ω/sq for the best conducting network. Our most dilute networks show lower sheet resistance and higher optical transmittance than an 80 nm thick layer of ITO sputtered on glass. By comparing measurements and simulations, we identify four distinct physical phenomena that govern the transmission of light through the networks, all related to the excitation of localized surface plasmons and surface plasmon polaritons on the wires. The insights given in this paper provide the key guidelines for designing high-transmittance and low-resistance nanowire electrodes for optoelectronic devices, including thin-film solar cells. 
For the latter, we discuss the general design principles for also using the nanowire electrodes as a light-trapping scheme.", "title": "" }, { "docid": "2d6718172b83ef2a109f91791af6a0c3", "text": "BACKGROUND & AIMS\nWe previously established long-term culture conditions under which single crypts or stem cells derived from mouse small intestine expand over long periods. The expanding crypts undergo multiple crypt fission events, simultaneously generating villus-like epithelial domains that contain all differentiated types of cells. We have adapted the culture conditions to grow similar epithelial organoids from mouse colon and human small intestine and colon.\n\n\nMETHODS\nBased on the mouse small intestinal culture system, we optimized the mouse and human colon culture systems.\n\n\nRESULTS\nAddition of Wnt3A to the combination of growth factors applied to mouse colon crypts allowed them to expand indefinitely. Addition of nicotinamide, along with a small molecule inhibitor of Alk and an inhibitor of p38, were required for long-term culture of human small intestine and colon tissues. The culture system also allowed growth of mouse Apc-deficient adenomas, human colorectal cancer cells, and human metaplastic epithelia from regions of Barrett's esophagus.\n\n\nCONCLUSIONS\nWe developed a technology that can be used to study infected, inflammatory, or neoplastic tissues from the human gastrointestinal tract. These tools might have applications in regenerative biology through ex vivo expansion of the intestinal epithelia. Studies of these cultures indicate that there is no inherent restriction in the replicative potential of adult stem cells (or a Hayflick limit) ex vivo.", "title": "" }, { "docid": "be9c88e6916e1c5af04e8ae1b6dc5748", "text": "In neural networks, the learning rate of gradient descent strongly affects performance. This prevents reliable out-of-the-box training of a model on a new problem. 
We propose the All Learning Rates At Once (Alrao) algorithm: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude, in the hope that enough units will get a close-to-optimal learning rate. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various network architectures and problems. In our experiments, all Alrao runs were able to learn well without any tuning.", "title": "" }, { "docid": "49dd1fd4640a160ba41fed048b2c804b", "text": "This paper proposes a novel method to predict increases in YouTube viewcount driven from the Twitter social network. Specifically, we aim to predict two types of viewcount increases: a sudden increase in viewcount (named as Jump), and the viewcount shortly after the upload of a new video (named as Early). Experiments on hundreds of thousands of videos and millions of tweets show that Twitter-derived features alone can predict whether a video will be in the top 5% for Early popularity with 0.7 Precision@100. Furthermore, our results reveal that while individual influence is indeed important for predicting how Twitter drives YouTube views, it is a diversity of interest from the most active to the least active Twitter users mentioning a video (measured by the variation in their total activity) that is most informative for both Jump and Early prediction. In summary, by going beyond features that quantify individual influence and additionally leveraging collective features of activity variation, we are able to obtain an effective cross-network predictor of Twitter-driven YouTube views.", "title": "" }, { "docid": "ecd486fabd206ad8c28ea9d9da8cd0ee", "text": "The prevailing binding of SOAP to HTTP specifies that SOAP messages be encoded as an XML 1.0 document which is then sent between client and server. 
XML processing, however, can be slow and memory intensive, especially for scientific data, and consequently SOAP has been regarded as an inappropriate protocol for scientific data. Efficiency considerations thus lead to the prevailing practice of separating data from the SOAP control channel. Instead, it is stored in specialized binary formats and transmitted either via attachments or indirectly via a file sharing mechanism, such as GridFTP or HTTP. This separation invariably complicates development due to the multiple libraries and type systems to be handled; furthermore, it suffers from performance issues, especially when handling small binary data. As an alternative solution, binary XML provides a highly efficient encoding scheme for binary data in the XML and SOAP messages, and with it we can gain high performance as well as unifying the development environment without unduly impacting the Web service protocol stack. In this paper we present our implementation of a generic SOAP engine that supports both textual XML and binary XML as the encoding scheme of the message. We also present our binary XML data model and encoding scheme. Our experiments show that for scientific applications binary XML together with the generic SOAP implementation not only eases development, but also provides better performance and is more widely applicable than the commonly used separated schemes", "title": "" }, { "docid": "01a1a1e3dfe85cd6765e4459c340a483", "text": "Traditional Chinese Medicine (TCM) illustrates that the physique determines the susceptibility of humans to certain diseases and treatment programs for illness. Tongue diagnosis is an important way to identify the physique, but now it is performed by the doctor’s professional experience and the design of a questionnaire. Consequently, accurate physique identification cannot be obtained easily. In this paper, we propose a new method to identify the physique through wild tongue images using hybrid deep learning methods. 
It begins with constructing a large set of tongue images taken in natural conditions, instead of in a controlled environment. Based on the resulting database, a new method of tongue coating detection is put forward that applies a rapid deep learning method to complete the initial tongue coating detection, and then utilizes another deep learning method, a calibration neural network, to further improve the accuracy of tongue detection. Finally, an effective deep learning method is applied to identify the tongue physique. Experiments validate the proposed method, illustrating that physique identification can be performed well using hybrid deep learning methods.", "title": "" }, { "docid": "35adbc66c3b98543471bbe47cb71e00d", "text": "Because of their demonstrated capabilities in attaining high rates of advance in civil tunnel construction, the hard rock mining industry has always shown a major interest in the use of TBMs for mine development, primarily for development of entries, as well as ventilation, haulage and production drifts. The successful application of TBM technology to mining depends on the selection of the most suitable equipment and cutting tools for the rock and ground conditions to be encountered. In addition to geotechnical investigations and required rock testing, cutterhead design optimization is an integral part of the machine selection to ensure a successful application of the machines in a specific underground mine environment. This paper presents and discusses selected case histories of TBM applications in mining, the lessons learned, the process of laboratory testing together with machine selection and performance estimation methods.", "title": "" }, { "docid": "b94d146408340ce2a89b95f1b47e91f6", "text": "In order to improve the life quality of amputees, providing a prosthetic hand with manipulation ability approximating that of a human hand is considered by many researchers. 
In this study, a biomechanical model of the index finger of the human hand is developed based on the human anatomy. Since the activation of finger bones is carried out by tendons, a tendon configuration of the index finger is introduced and used in the model to imitate the human hand characteristics and functionality. Then, fuzzy sliding mode control, where the slope of the sliding surface is tuned by a fuzzy logic unit, is proposed and applied to make the finger model follow a certain trajectory. The trajectory of the finger model, which mimics the motion characteristics of the human hand, is pre-determined from the camera images of a real hand during closing and opening motion. Also, in order to check the robust behaviour of the controller, an unexpected joint friction is induced on the prosthetic finger on its way. Finally, the resultant prosthetic finger motion and the tendon forces produced are given and results are discussed.", "title": "" }, { "docid": "d2401987609efcb5a7fe420d48dfec1b", "text": "Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets. The Fully Independent Training Conditional (FITC) and the Variational Free Energy (VFE) approximations are two recent popular methods. Despite superficial similarities, these approximations have surprisingly different theoretical properties and behave differently in practice. We thoroughly investigate the two methods for regression both analytically and through illustrative examples, and draw conclusions to guide practical application.", "title": "" }, { "docid": "dd50ef22ed75db63254df4dc369d6891", "text": "Speech Recognition by computer is a process where speech signals are automatically converted into the corresponding sequence of words in text. When the training and testing conditions are not similar, statistical speech recognition algorithms suffer from severe degradation in recognition accuracy. 
So we depend on intelligent and recognizable sounds for common communications. In this research, word inputs are recognized by the system and executed in the form of text corresponding to the input word. In this paper, we propose a hybrid model by using a fully connected hidden layer between the input state nodes and the output. We have proposed a new objective function for the neural network using a combined framework of statistical and neural network based classifiers. We have used the hybrid model of Radial Basis Function and the Pattern Matching method. The system was trained on an Indian English word set consisting of 50 words uttered by 20 male speakers and 20 female speakers. The test samples comprised 30 words spoken by a different set of 20 male speakers and 20 female speakers. The recognition accuracy is found to be 91%, which is well above the previous results.", "title": "" }, { "docid": "bbecbf907a81e988379fe61d8d8f9f17", "text": "In this paper, we address the problem of visual question answering by proposing a novel model, called VIBIKNet. Our model is based on integrating Kernelized Convolutional Neural Networks and Long-Short Term Memory units to generate an answer given a question about an image. We prove that VIBIKNet is an optimal trade-off between accuracy and computational load, in terms of memory and time consumption. We validate our method on the VQA challenge dataset and compare it to the top performing methods in order to illustrate its performance and speed.", "title": "" }, { "docid": "591b0a6e8d690dd77485b13cb0b14a9f", "text": "A human face provides a variety of different communicative functions such as identification, the perception of emotional expression, and lip-reading. For these reasons, many applications in robotics require tracking and recognizing a human face. 
A novel face recognition system should be able to deal with various changes in face images, such as pose, illumination, and expression, among which pose variation is the most difficult one to deal with. Therefore, face registration (alignment) is the key to robust face recognition. If we can register face images into frontal views, the recognition task would be much easier. To align a face image into a canonical frontal view, we need to know the pose information of a human head. Therefore, in this paper, we propose a novel method for modeling a human head as a simple 3D ellipsoid. And also, we present 3D head tracking and pose estimation methods using the proposed ellipsoidal model. After recovering full motion of the head, we can register face images with pose variations into stabilized view images which are suitable for frontal face recognition. By doing so, simple and efficient frontal face recognition can be easily carried out in the stabilized texture map space instead of the original input image space. To evaluate the feasibility of the proposed approach using a simple ellipsoid model, 3D head tracking experiments are carried out on 45 image sequences with ground truth from Boston University, and several face recognition experiments are conducted on our laboratory database and the Yale Face Database B by using subspace-based face recognition methods such as PCA, PCA+LDA, and DCV.", "title": "" }, { "docid": "c2195ae053d1bbf712c96a442a911e31", "text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaptation methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lie on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. 
scidocsrr
06efed59acbf413361449091065b3d14
A Survey on Collaborative Deep Learning and Privacy-Preserving
[ { "docid": "ba4f3060a36021ef60f7bc6c9cde9d35", "text": "Neural Networks (NN) are today increasingly used in Machine Learning where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development however also gives rise to important data privacy risks. This observation motives Microsoft researchers to propose a framework, called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryptions (FHE) techniques to get both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (eg 2), Cryptonets unfortunately becomes ineffective for deeper NNs which let the problem of privacy preserving matching open in these contexts. This work successfully addresses this problem by combining the original ideas of Cryptonets’ solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with 6 non-linear layers. When applied to the MNIST database, it competes the accuracy of the best non-secure versions, thus significantly improving Cryptonets.", "title": "" }, { "docid": "e49aa0d0f060247348f8b3ea0a28d3c6", "text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. 
In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "title": "" }, { "docid": "3bc897662b39bcd59b7c7831fb1df091", "text": "The proliferation of wearable devices has contributed to the emergence of mobile crowdsensing, which leverages the power of the crowd to collect and report data to a third party for large-scale sensing and collaborative learning. However, since the third party may not be honest, privacy poses a major concern. In this paper, we address this concern with a two-stage privacy-preserving scheme called RG-RP: the first stage is designed to mitigate maximum a posteriori (MAP) estimation attacks by perturbing each participant's data through a nonlinear function called repeated Gompertz (RG), while the second stage aims to maintain accuracy and reduce transmission energy by projecting high-dimensional data to a lower dimension, using a row-orthogonal random projection (RP) matrix. The proposed RG-RP scheme delivers better recovery resistance to MAP estimation attacks than most state-of-the-art techniques on both synthetic and real-world datasets. For collaborative learning, we propose a novel LSTM-CNN model combining the merits of Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Our experiments on two representative movement datasets captured by wearable sensors demonstrate that the proposed LSTM-CNN model outperforms standalone LSTM, CNN and Deep Belief Network. 
Together, RG-RP and LSTM-CNN provide a collaborative learning framework that is both accurate and privacy-preserving.", "title": "" }, { "docid": "b5a4b5b3e727dde52a9c858d3360a2e7", "text": "Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets. Stochastic gradient methods are a popular approach for learning in the data-rich regime because they are computationally tractable and scalable. In this paper, we derive differentially private versions of stochastic gradient descent, and test them empirically. Our results show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.", "title": "" } ]
[ { "docid": "dc67945b32b2810a474acded3c144f68", "text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.", "title": "" }, { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "236e19e8300be2486938cfd016371121", "text": "Cyberbullying is a serious social problem in online environments and social networks. Current approaches to tackle this problem are still inadequate for detecting bullying incidents or to flag bullies. 
In this study we used a multi-criteria evaluation system to obtain a better understanding of YouTube users' behaviour and their characteristics through expert knowledge. Based on experts' knowledge, the system assigns a score to the users, which represents their level of “bulliness” based on the history of their activities. The scores can be used to discriminate among users with a bullying history and those who were not engaged in hurtful acts. This preventive approach can provide information about users of social networks and can be used to build monitoring tools to aid in finding and stopping potential bullies.", "title": "" }, { "docid": "e5c602d9996109ea713eba551a9bf94b", "text": "Several focus measures were studied in this paper as the measures of image clarity, in the field of multi-focus image fusion. All these focus measures are defined in the spatial domain and can be implemented in real-time fusion systems with fast response and robustness. This paper proposes a method to assess focus measures according to focus measures’ capability of distinguishing focused image blocks from defocused image blocks. Experiments were conducted on several sets of images and results show that sum-modified-Laplacian (SML) can provide better performance than other focus measures, when the execution time is not included in the evaluation.", "title": "" }, { "docid": "401b2494b8b032751c219726671cb48e", "text": "Current state-of-the-art approaches to skeleton-based action recognition are mostly based on recurrent neural networks (RNN). In this paper, we propose a novel convolutional neural networks (CNN) based framework for both action classification and detection. Raw skeleton coordinates as well as skeleton motion are fed directly into CNN for label prediction. A novel skeleton transformer module is designed to rearrange and select important skeleton joints automatically. 
With a simple 7-layer network, we obtain 89.3% accuracy on the validation set of the NTU RGB+D dataset. For action detection in untrimmed videos, we develop a window proposal network to extract temporal segment proposals, which are further classified within the same network. On the recent PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large margin.", "title": "" }, { "docid": "224287bfe0a3f7b3236b442748a59cff", "text": "Interactive image processing techniques, along with a linear-programming-based inductive classifier, have been used to create a highly accurate system for diagnosis of breast tumors. A small fraction of a fine needle aspirate slide is selected and digitized. With an interactive interface, the user initializes active contour models, known as snakes, near the boundaries of a set of cell nuclei. The customized snakes are deformed to the exact shape of the nuclei. This allows for precise, automated analysis of nuclear size, shape and texture. Ten such features are computed for each nucleus, and the mean value, largest (or \"worst\") value and standard error of each feature are found over the range of isolated cells. After 569 images were analyzed in this fashion, different combinations of features were tested to find those which best separate benign from malignant samples. Tenfold cross-validation accuracy of 97% was achieved using a single separating plane on three of the thirty features: mean texture, worst area and worst smoothness. This represents an improvement over the best diagnostic results in the medical literature. The system is currently in use at the University of Wisconsin Hospitals. 
The same feature set has also been utilized in the much more difficult task of predicting distant recurrence of malignancy in patients, resulting in an accuracy of 86%.", "title": "" }, { "docid": "c2a955e02d73537a9439e03c1d4d1788", "text": "BACKGROUND\nLyme neuroborreliosis (LNB) is a nervous system infection caused by Borrelia burgdorferi sensu lato (Bb).\n\n\nOBJECTIVES\nTo present evidence-based recommendations for diagnosis and treatment.\n\n\nMETHODS\nData were analysed according to levels of evidence as suggested by EFNS.\n\n\nRECOMMENDATIONS\nThe following three criteria should be fulfilled for definite LNB, and two of them for possible LNB: (i) neurological symptoms; (ii) cerebrospinal fluid (CSF) pleocytosis; (iii) Bb-specific antibodies produced intrathecally. PCR and CSF culture may be corroborative if symptom duration is <6 weeks, when Bb antibodies may be absent. PCR is otherwise not recommended. There is also not enough evidence to recommend the following tests for diagnostic purposes: microscope-based assays, chemokine CXCL13, antigen detection, immune complexes, lymphocyte transformation test, cyst formation, lymphocyte markers. Adult patients with definite or possible acute LNB (symptom duration <6 months) should be offered a single 14-day course of antibiotic treatment. Oral doxycycline (200 mg daily) and intravenous (IV) ceftriaxone (2 g daily) are equally effective in patients with symptoms confined to the peripheral nervous system, including meningitis (level A). Patients with CNS manifestations should be treated with IV ceftriaxone (2 g daily) for 14 days and late LNB (symptom duration >6 months) for 3 weeks (good practice points). Children should be treated as adults, except that doxycycline is contraindicated under 8 years of age (nine in some countries). If symptoms persist for more than 6 months after standard treatment, the condition is often termed post-Lyme disease syndrome (PLDS). 
Antibiotic therapy has no impact on PLDS (level A).", "title": "" }, { "docid": "f52073ddb9c4507d11190cd13637b91d", "text": "The application of fuzzy-based control strategies has recently gained enormous recognition as an approach for the rapid development of effective controllers for nonlinear time-variant systems. This paper describes the preliminary research and implementation of a fuzzy logic based controller to control the wheel slip for electric vehicle antilock braking systems (ABSs). As the dynamics of the braking systems are highly nonlinear and time variant, fuzzy control offers potential as an important tool for development of robust traction control. Simulation studies are employed to derive an initial rule base that is then tested on an experimental test facility representing the dynamics of a braking system. The test facility is composed of an induction machine load operating in the generating region. It is shown that the torque-slip characteristics of an induction motor provides a convenient platform for simulating a variety of tire/road driving conditions, negating the initial requirement for skid-pan trials when developing algorithms. The fuzzy membership functions were subsequently refined by analysis of the data acquired from the test facility while simulating operation at a high coefficient of friction. The robustness of the fuzzy-logic slip regulator is further tested by applying the resulting controller over a wide range of operating conditions. 
The results indicate that ABS/traction control may substantially improve longitudinal performance and offer significant potential for optimal control of driven wheels, especially under icy conditions where classical ABS/traction control schemes are constrained to operate very conservatively.", "title": "" }, { "docid": "c983e94a5334353ec0e2dabb0e95d92a", "text": "Digital family calendars have the potential to help families coordinate, yet they must be designed to easily fit within existing routines or they will simply not be used. To understand the critical factors affecting digital family calendar design, we extended LINC, an inkable family calendar to include ubiquitous access, and then conducted a month-long field study with four families. Adoption and use of LINC during the study demonstrated that LINC successfully supported the families' existing calendaring routines without disrupting existing successful social practices. Families also valued the additional features enabled by LINC. For example, several primary schedulers felt that ubiquitous access positively increased involvement by additional family members in the calendaring routine. The field trials also revealed some unexpected findings, including the importance of mobility---both within and outside the home---for the Tablet PC running LINC.", "title": "" }, { "docid": "871778b5f2bb9097d7072fe3c856669b", "text": "Social exchange and evolutionary models of mate selection incorporate economic assumptions but have not considered a key distinction between necessities and luxuries. This distinction can clarify an apparent paradox: Status and attractiveness, though emphasized by many researchers, are not typically rated highly by research participants. Three studies supported the hypothesis that women and men first ensure sufficient levels of necessities in potential mates before considering many other characteristics rated as more important in prior surveys. 
In Studies 1 and 2, participants designed ideal long-term mates, purchasing various characteristics with 3 different budgets. Study 3 used a mate-screening paradigm and showed that people inquire 1st about hypothesized necessities. Physical attractiveness was a necessity to men, status and resources were necessities to women, and kindness and intelligence were necessities to both.", "title": "" }, { "docid": "419c721c2d0a269c65fae59c1bdb273c", "text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called \"navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.", "title": "" }, { "docid": "3571e2646d76d5f550075952cb75ba30", "text": "Traditional simultaneous localization and mapping (SLAM) algorithms have been used to great effect in flat, indoor environments such as corridors and offices. We demonstrate that with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments. In particular, we use data acquired with a 3D laser range finder. We use a simple segmentation algorithm to separate the data stream into distinct point clouds, each referenced to a vehicle position. The SLAM technique we then adopt inherits much from 2D delayed state (or scan-matching) SLAM in that the state vector is an ever growing stack of past vehicle positions and inter-scan registrations are used to form measurements between them. 
The registration algorithm used is a novel combination of previous techniques carefully balancing the need for maximally wide convergence basins, robustness and speed. In addition, we introduce a novel post-registration classification technique to detect matches which have converged to incorrect local minima.", "title": "" }, { "docid": "433ad2acfdeee2e2bb28dde529338ea8", "text": "Cardiac fibrosis is characterized by excessive extracellular matrix accumulation that ultimately destroys tissue architecture and eventually abolishes normal function. In recent years, although the underlying mechanisms of cardiac fibrosis are still unknown, numerous studies suggest that epigenetic modifications impact on the development of cardiac fibrosis. Epigenetic modifications control cell proliferation, differentiation, migration, and so on. Epigenetic modifications contain three main processes: DNA methylation, histone modifications, and silencing by microRNAs. We here outline the recent work pertaining to epigenetic changes in cardiac fibrosis. This review focuses on the epigenetic regulation of cardiac fibrosis.", "title": "" }, { "docid": "f4065bd38779754896e2308773bf5f61", "text": "Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at the two terminals of memristors makes them promising candidates to perform matrix-vector multiplications and solve systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used for efficiently solving optimization and machine learning problems. 
We introduce a new memristor-based optimization framework that combines the computational merit of memristor crossbars with the advantages of an operator splitting method, alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The capability of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based PI method can further be applied to principal component analysis (PCA). The use of memristor crossbars yields a significant speed-up in computation, and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence (AI).", "title": "" }, { "docid": "64e54dc578b5e6c3faa2d910ba2b808c", "text": "A design of a radome-covered slot antenna array based on Substrate Integrated Waveguide (SIW) technology is presented in this paper. The design method consists of the analysis of an isolated radiating element, the synthesis of a linear array, the optimization of a planar array, and the development of a power divider with a transition from a feeding metal waveguide to an SIW. The antenna array is designed using a full-wave electromagnetic solver (CST Microwave Studio) for the operating frequency band 26.5-27.5 GHz. The paper describes simulation results of the antenna array consisting of 10×10 longitudinal slots and simulated and measured results of a fabricated antenna array consisting of 4×4 slots.", "title": "" }, { "docid": "971e39e4b99695f249ec1d367b5044f0", "text": "Research on curiosity has undergone 2 waves of intense activity. The 1st, in the 1960s, focused mainly on curiosity's psychological underpinnings. 
The 2nd, in the 1970s and 1980s, was characterized by attempts to measure curiosity and assess its dimensionality. This article reviews these contributions with a concentration on the 1st wave. It is argued that theoretical accounts of curiosity proposed during the 1st period fell short in 2 areas: They did not offer an adequate explanation for why people voluntarily seek out curiosity, and they failed to delineate situational determinants of curiosity. Furthermore, these accounts did not draw attention to, and thus did not explain, certain salient characteristics of curiosity: its intensity, transience, association with impulsivity, and tendency to disappoint when satisfied. A new account of curiosity is offered that attempts to address these shortcomings. The new account interprets curiosity as a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding.", "title": "" }, { "docid": "11de03383fbd4178613eb4bdf47b90be", "text": "Question Generation (QG) and Question Answering (QA) are some of the many challenges for natural language understanding and interfaces. As humans need to ask good questions, the potential benefits from automated QG systems may assist them in meeting useful inquiry needs. In this paper, we consider an automatic Sentence-to-Question generation task, where given a sentence, the Question Generation (QG) system generates a set of questions for which the sentence contains, implies, or needs answers. To facilitate the question generation task, we build elementary sentences from the input complex sentences using a syntactic parser. A named entity recognizer and a part of speech tagger are applied on each of these sentences to encode necessary information. We classify the sentences based on their subject, verb, object and preposition for determining the possible type of questions to be generated. We use the TREC-2007 (Question Answering Track) dataset for our experiments and evaluation. 
Keywords: Question generation, Syntactic parser, Elementary sentences, POS tagging.", "title": "" }, { "docid": "0d9f3c9bca0d79198beddc1883f76b4a", "text": "This paper investigates the transformation of λv-terms into continuation-passing style (CPS). We show that by appropriate η-expansion of Fischer and Plotkin's two-pass equational specification of the CPS transform, we can obtain a static and context-free separation of the result terms into \"essential\" and \"administrative\" constructs. Interpreting the former as syntax builders and the latter as directly executable code, we obtain a simple and efficient one-pass transformation algorithm, easily extended to conditional expressions, recursive definitions, and similar constructs. This new transformation algorithm leads to a simpler proof of Plotkin's simulation and indifference results. We go on to show how CPS-based control operators similar to, but more general than, Scheme's call/cc can be naturally accommodated by the new transformation algorithm. To demonstrate the expressive power of these operators, we use them to present an equivalent but even more concise formulation of the efficient CPS transformation algorithm. Finally, we relate the fundamental ideas underlying this derivation to similar concepts from other works on program manipulation; we derive a one-pass CPS transformation of λn-terms; and we outline some promising areas for future research.", "title": "" }, { "docid": "9571116e0d70a229970913e8b918b9be", "text": "The reservoir capacity of dogs for Trypanosoma cruzi infection was analyzed in the city of Campeche, an urban town located in the Yucatan peninsula in Mexico. The city is inhabited by ~96,000 dogs and ~168,000 humans; Triatoma dimidiata is the only recognized vector. In the present study, we sampled 262 dogs (148 stray dogs and 114 pet dogs) and 2800 young people (ranging in age between 15 and 20 years old) and tested for T. 
cruzi antibodies by enzyme-linked immunosorbent assay, Indirect Immunofluorescence, and Western blotting serological assays. Seroprevalence in stray dogs was twice as high as in pet dogs (9.5% vs. 5.3%) with a general seroprevalence of 7.6%. In humans, the observed seroprevalence was 76 times lower than in dogs (0.1% vs. 7.6%, respectively). Western blotting analysis showed that dogs' antibodies recognized different T. cruzi antigenic patterns than those for humans. In conclusion, T. cruzi infection in Campeche, Mexico, represents a low potential risk to inhabitants but deserves vigilance.", "title": "" }, { "docid": "45c669a351a636bd707f5dd9c9613e2c", "text": "The present paper shows how to construct a maximum matching in a bipartite graph with n vertices and m edges in a number of computation steps proportional to (m + n)√n.", "title": "" } ]
scidocsrr
d6f6ef29d39924604fb09596eb6aeb37
An extension of the technology acceptance model in an ERP implementation environment
[ { "docid": "a4197ab8a70142ac331599c506996bc9", "text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. 
Differences in conditions of usage are explored to explain these findings.", "title": "" }, { "docid": "bd13f54cd08fe2626fe8de4edce49197", "text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that reflects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "37b97f66230fb292f585d0413af48986", "text": "In this paper, we notice that sparse and low-rank structures arise in the context of many collaborative filtering applications where the underlying graphs have block-diagonal adjacency matrices. Therefore, we propose a novel Sparse and Low-Rank Linear Method (LorSLIM) to capture such structures and apply this model to improve the accuracy of the Top-N recommendation. Precisely, a sparse and low-rank aggregation coefficient matrix W is learned from LorSLIM by solving an l1-norm and nuclear norm regularized optimization problem. We also develop an efficient alternating augmented Lagrangian method (ADMM) to solve the optimization problem. A comprehensive set of experiments is conducted to evaluate the performance of LorSLIM. The experimental results demonstrate the superior recommendation quality of the proposed algorithm in comparison with current state-of-the-art methods.", "title": "" }, { "docid": "25f0871346c370db4b26aecd08a9d75e", "text": "This review presents a comprehensive discussion of the key technical issues in woody biomass pretreatment: barriers to efficient cellulose saccharification, pretreatment energy consumption, in particular energy consumed for wood-size reduction, and criteria to evaluate the performance of a pretreatment. A post-chemical pretreatment size-reduction approach is proposed to significantly reduce mechanical energy consumption. Because the ultimate goal of biofuel production is net energy output, a concept of pretreatment energy efficiency (kg/MJ) based on the total sugar recovery (kg/kg wood) divided by the energy consumption in pretreatment (MJ/kg wood) is defined. It is then used to evaluate the performances of three of the most promising pretreatment technologies: steam explosion, organosolv, and sulfite pretreatment to overcome lignocelluloses recalcitrance (SPORL) for softwood pretreatment. 
The present study found that SPORL is the most efficient process and produced the highest sugar yield. Other important issues, such as the effects of lignin on substrate saccharification and the effects of pretreatment on high-value lignin utilization in woody biomass pretreatment, are also discussed.", "title": "" }, { "docid": "aeed0f9595c9b40bb03c95d4624dd21c", "text": "Most research in primary and secondary computing education has focused on understanding learners within formal classroom communities, leaving aside the growing number of promising informal online programming communities where young learners contribute, comment, and collaborate on programs. In this paper, we examined trends in computational participation in Scratch, an online community with over 1 million registered youth designers primarily 11-18 years of age. Drawing on a random sample of 5,000 youth programmers and their activities over three months in early 2012, we examined the quantity of programming concepts used in projects in relation to level of participation, gender, and account age of Scratch programmers. Latent class analyses revealed four unique groups of programmers. While there was no significant link between level of online participation, ranging from low to high, and level of programming sophistication, the exception was a small group of highly engaged users who were most likely to use more complex programming concepts. Groups who only used few of the more sophisticated programming concepts, such as Booleans, variables and operators, were identified as Scratch users new to the site and girls. 
In the discussion we address the challenges of analyzing young learners' programming in informal online communities and opportunities for designing more equitable computational participation.", "title": "" }, { "docid": "9f21af3bc0955dcd9a05898f943f54ad", "text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. 
DCS is immediately applicable to a range of problems in sensor networks and arrays.", "title": "" }, { "docid": "981634bc9b96eba12fd07e8960d02c2d", "text": "This paper presents the existing legal frameworks, professional guidelines and other documents related to the conditions and extent of the disclosure of genetic information by physicians to at-risk family members. Although the duty of a physician regarding disclosure of genetic information to a patient’s relatives has only been addressed by few legal cases, courts have found such a duty under some circumstances. Generally, disclosure should not be permitted without the patient’s consent. Yet, due to the nature of genetic information, exceptions are foreseen, where treatment and prevention are available. This duty to warn a patient’s relative is also supported by some professional and policy organizations that have addressed the issue. Practice guidelines with a communication and intervention plan are emerging, providing physicians with tools that allow them to assist patients in their communication with relatives without jeopardizing their professional liability. Since guidelines aim to improve the appropriateness of medical practice and consequently to better serve the interests of patients, it is important to determine to what degree they document the ‘best practice’ standards. Such an analysis is an essential step to evaluate the different approaches permitting the disclosure of genetic information to family members.", "title": "" }, { "docid": "6897b2842b041e75278aec7bc03ec870", "text": "PURPOSE\nThe optimal treatment of systemic sclerosis (SSc) is a challenge because the pathogenesis of SSc is unclear and it is an uncommon and clinically heterogeneous disease affecting multiple organ systems. 
The aim of the European League Against Rheumatism (EULAR) Scleroderma Trials and Research group (EUSTAR) was to develop evidence-based, consensus-derived recommendations for the treatment of SSc.\n\n\nMETHODS\nTo obtain and maintain a high level of intrinsic quality and comparability of this approach, EULAR standard operating procedures were followed. The task force comprised 18 SSc experts from Europe, the USA and Japan, two SSc patients and three fellows for literature research. The preliminary set of research questions concerning SSc treatment was provided by 74 EUSTAR centres.\n\n\nRESULTS\nBased on discussion of the clinical research evidence from published literature, and combining this with current expert opinion and clinical experience, 14 recommendations for the treatment of SSc were formulated. The final set includes the following recommendations: three on SSc-related digital vasculopathy (Raynaud's phenomenon and ulcers); four on SSc-related pulmonary arterial hypertension; three on SSc-related gastrointestinal involvement; two on scleroderma renal crisis; one on SSc-related interstitial lung disease and one on skin involvement. Experts also formulated several questions for a future research agenda.\n\n\nCONCLUSIONS\nEvidence-based, consensus-derived recommendations are useful for rheumatologists to help guide treatment for patients with SSc. These recommendations may also help to define directions for future clinical research in SSc.", "title": "" }, { "docid": "2c2574e1eb29ad45bedf346417c85e2d", "text": "Technology has shown great promise in providing access to textual information for visually impaired people. Optical Braille Recognition (OBR) allows people with visual impairments to read volumes of typewritten documents with the help of flatbed scanners and OBR software. This project looks at developing a system to recognize an image of embossed Arabic Braille and then convert it to text. 
It particularly aims to build a fully functional Optical Arabic Braille Recognition system. It has two main tasks: the first is to recognize printed Braille cells, and the second is to convert them to regular text. Converting Braille to text is not simply a one to one mapping, because one cell may represent one symbol (alphabet letter, digit, or special character), two or more symbols, or part of a symbol. Moreover, multiple cells may represent a single symbol.", "title": "" }, { "docid": "557694b6db3f20adc700876d75ad7720", "text": "Unseen Action Recognition (UAR) aims to recognise novel action categories without training examples. While previous methods focus on inner-dataset seen/unseen splits, this paper proposes a pipeline using a large-scale training source to achieve a Universal Representation (UR) that can generalise to a more realistic Cross-Dataset UAR (CDUAR) scenario. We first address UAR as a Generalised Multiple-Instance Learning (GMIL) problem and discover 'building-blocks' from the large-scale ActivityNet dataset using distribution kernels. Essential visual and semantic components are preserved in a shared space to achieve the UR that can efficiently generalise to new datasets. Predicted UR exemplars can be improved by a simple semantic adaptation, and then an unseen action can be directly recognised using UR during the test. Without further training, extensive experiments manifest significant improvements over the UCF101 and HMDB51 benchmarks.", "title": "" }, { "docid": "3d401d8d3e6968d847795ccff4646b43", "text": "In spite of the growing frequency and sophistication of attacks, two factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost benefit analysis is not as strongly in favor of two factor as we might imagine. 
Upgrading from passwords to a two factor authentication system usually involves a large engineering effort, a discontinuity of user experience and a hard key management problem. In this paper we describe a system to convert a legacy password authentication server into a two factor system. The existing password system is untouched, but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching or updates to the legacy system is necessary. There are now two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two factor scheme. Once migration is complete the password-only path can be severed. We have implemented the system and carried out two factor authentication against real accounts at several major banks.", "title": "" }, { "docid": "ca509048385b8cf28bd7b89c685f21b2", "text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. 
While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.", "title": "" }, { "docid": "16a0329d2b7a6995a48bdef0e845658a", "text": "The digital market has never been so unstable due to more and more demanding users and new disruptive competitors. CEOs from most industries investigate digitalization opportunities. Through a Systematic Literature Review, we found that digital transformation is more than just a technological shift. According to this study, these transformations have had an impact on the business models, the operational processes and the end-users experience. Considering the richness of this topic, we propose a research agenda of digital transformation in a managerial perspective.", "title": "" }, { "docid": "05a5e3849c9fca4d788aa0210d8f7294", "text": "The growth of mobile phone users has led to a dramatic increase in SMS spam messages. Recent reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. In practice, fighting such a plague is made difficult by several factors, including the lower rate of SMS that has allowed many users and service providers to ignore the issue, and the limited availability of mobile phone spam-filtering software. Probably, one of the major concerns in academic settings is the scarcity of public SMS spam datasets, which are sorely needed for validation and comparison of different classifiers. Moreover, traditional content-based filters may have their performance seriously degraded since SMS messages are fairly short and their text is generally rife with idioms and abbreviations. In this paper, we present details about a new real, public and non-encoded SMS spam collection that is the largest one as far as we know. 
Moreover, we offer a comprehensive analysis of this dataset in order to ensure that there are no duplicated messages coming from previously existing datasets, since duplicates may ease the task of learning SMS spam classifiers and could compromise the evaluation of methods. Additionally, we compare the performance achieved by several established machine learning techniques. In summary, the results indicate that the procedure followed to build the collection does not lead to near-duplicates and, regarding the classifiers, the Support Vector Machine outperforms the other evaluated techniques and, hence, can be used as a good baseline for further comparison. Keywords—Mobile phone spam; SMS spam; spam filtering; text categorization; classification.", "title": "" }, { "docid": "bc2bc8b2d9db3eb14e126c627248a66a", "text": "The growing complexity of today's software applications, in conjunction with increasing competitive pressure, has pushed the quality assurance of developed software to new heights. Software testing is an inevitable part of the Software Development Lifecycle, and its criticality in the pre- and post-development process means it should be supported with enhanced and efficient methodologies and techniques. This paper aims to discuss existing as well as improved testing techniques for better quality assurance.", "title": "" }, { "docid": "11806624e22ec2b72cd692755e8b2764", "text": "The improvement of file access performance is a great challenge in real-time cloud services. In this paper, we analyze the preconditions for dealing with this problem, considering the aspects of requirements, hardware, software, and network environments in the cloud. Then we describe the design and implementation of a novel distributed layered cache system built on top of the Hadoop Distributed File System, which is named the HDFS-based Distributed Cache System (HDCache). 
The cache system consists of a client library and multiple cache services. The cache services are designed with three access layers: an in-memory cache, a snapshot of the local disk, and the actual disk view as provided by HDFS. Files loaded from HDFS are cached in shared memory, which can be accessed directly by a client library. Multiple applications integrated with a client library can access a cache service simultaneously. Cache services are organized in the P2P style using a distributed hash table. Every cached file has three replicas in different cache service nodes in order to improve robustness and alleviate the workload. Experimental results show that the novel cache system can store files with a wide range of sizes and achieves millisecond-level access performance in highly concurrent environments.", "title": "" }, { "docid": "09baf9c55e7ae35bdcf88742ecdc01d5", "text": "This paper presents the experimental evaluation of a Bluetooth-based positioning system. The method has been implemented in a Bluetooth-capable handheld device. Empirical tests of the developed positioning system have been carried out in different indoor scenarios. The range estimation of the positioning system is based on an approximation of the relation between the RSSI (Radio Signal Strength Indicator) and the associated distance between sender and receiver. The actual location estimation is carried out by using the triangulation method. The implementation of the positioning system in a PDA (Personal Digital Assistant) has been realized by using the software “Microsoft eMbedded Visual C++ Version 3.0”.", "title": "" }, { "docid": "6c829f1d93b0b943065bafab433e61b9", "text": "recognition by using the Mel-Scale Frequency Cepstral Coefficients (MFCC) extracted from the speech signal of spoken words. 
Principal Component Analysis is employed as a supplementary feature-dimension reduction step, prior to training and testing speech samples via a Maximum Likelihood Classifier (ML) and a Support Vector Machine (SVM). Based on an experimental database of a total of 40 utterances of spoken words collected in an acoustically controlled room, the sixteenth-order MFCC features significantly improve recognition rates when the SVM is trained with more MFCC samples randomly selected from the database, compared with the ML classifier.", "title": "" }, { "docid": "bfe62c8e438ff5ec697203295e658450", "text": "Using the qualitative participatory action methodology, collective memory work, this study explored how transgender, queer, and questioning (TQQ) youth make meaning of their sexual orientation and gender identity through high school experiences. Researchers identified three major conceptual but overlapping themes from the data generated in the transgender, queer, and questioning youth focus group: a need for resilience, you should be able to be safe, and this is what action looks like! The researchers discuss how, as a research product, a documentary can effectively \"capture voices\" of participants, making research accessible and attractive to parents, practitioners, policy makers, and participants.", "title": "" }, { "docid": "cbaff0ba24a648e8228a7663e3d32e97", "text": "Microservice architecture has started a new trend for application development/deployment in the cloud due to its flexibility, scalability, manageability and performance. Various microservice platforms have emerged to facilitate the whole software engineering cycle for cloud applications, from design, development, and testing to deployment and maintenance. In this paper, we propose a performance analytical model and validate it by experiments to study the provisioning performance of microservice platforms. 
We design and develop a microservice platform on the Amazon EC2 cloud using the Docker technology family to identify important elements contributing to the performance of microservice platforms. We leverage the results and insights from experiments to build a tractable analytical performance model that can be used to perform what-if analysis and capacity planning in a systematic manner for large-scale microservices with a minimum amount of time and cost.", "title": "" }, { "docid": "241a1589619c2db686675327cab1e8da", "text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.", "title": "" }, { "docid": "8390fd7e559832eea895fabeb48c3549", "text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process three- and higher-dimensional images. 
Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a (d-1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear", "title": "" } ]
scidocsrr
3770571c1c2367eb8dfd087594ff127a
An exact algorithm for team orienteering problems
[ { "docid": "47bfe9238083f0948c16d7beeac75155", "text": "In this paper, we propose a solution procedure for the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). A relaxed version of this problem in which the path does not have to be elementary has been the backbone of a number of solution procedures based on column generation for several important problems, such as vehicle routing and crew-pairing. In many cases relaxing the restriction of an elementary path resulted in optimal solutions in a reasonable computation time. However, for a number of other problems, the elementary path restriction has too much impact on the solution to be relaxed or might even be necessary. We propose an exact solution procedure for the ESPPRC which extends the classical label correcting algorithm originally developed for the relaxed (non-elementary) path version of this problem. We present computational experiments of this algorithm for our specific problem and embedded in a column generation scheme for the classical Vehicle Routing Problem with Time Windows.", "title": "" } ]
[ { "docid": "df2070a04f13c444e9aa466eaa3d45eb", "text": "Nature has always been a source of inspiration. Over the last few decades, it has stimulated many successful algorithms and computational tools for dealing with complex and optimization problems. This paper proposes a new heuristic algorithm that is inspired by the black hole phenomenon. Similar to other population-based algorithms, the black hole algorithm (BH) starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. At each iteration of the black hole algorithm, the best candidate is selected to be the black hole, which then starts pulling other candidates around it, called stars. If a star gets too close to the black hole, it will be swallowed by the black hole and is gone forever. In such a case, a new star (candidate solution) is randomly generated and placed in the search space and starts a new search. To evaluate the performance of the black hole algorithm, it is applied to solve the clustering problem, which is an NP-hard problem. The experimental results show that the proposed black hole algorithm outperforms other traditional heuristic algorithms for several benchmark datasets.", "title": "" }, { "docid": "25eee8be0a4e4e5dd29fe31ccc902b77", "text": "3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. 
The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant that a wider user base is now able to access desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process without requiring complex or expensive materials incorporating additives such as carbon nanotubes.", "title": "" }, { "docid": "bda892eb6cdcc818284f56b74c932072", "text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using the 32 nm CMOS predictive transistor model (PTM) achieve a controllable frequency range of 570 MHz~850 MHz with wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with a 0.9 V power supply.", "title": "" }, { "docid": "635ef4eb79aeea85f58676334c16be71", "text": "We propose a deep learning framework for modeling complex high-dimensional densities via Nonlinear Independent Component Estimation (NICE). 
It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable, and unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.", "title": "" }, { "docid": "55a37995369fe4f8ddb446d83ac0cecf", "text": "With the continued proliferation of smart mobile devices, the Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visually unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired, such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR codes, the SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain an art-style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. 
In the third stage, we design an error-correction mechanism by balancing two competing terms, visual quality and readability, to ensure robust performance. Extensive experiments demonstrate that the SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.", "title": "" }, { "docid": "8726e80818f0619f5157ad2295dee7df", "text": "The OptaSense® Distributed Acoustic Sensing (DAS) system is an acoustic and seismic sensing capability that uses simple fibre optic communications cables as the sensor. Using existing or new cables, it can provide low-cost and high-reliability surface crossing and tunnel construction detection, with power and communications services needed only every 80-100 km. The technology has been proven in worldwide security operations at over one hundred locations in a variety of industries including oil and gas pipelines, railways, and high-value facility perimeters - a total of 100,000,000 kilometre-hours of linear asset protection. The system reliably detects a variety of border threats with very few nuisance alarms. It can work in concert with existing border surveillance technologies to provide security personnel a new value proposition for fighting trans-border crime. Its ability to detect, classify and locate activity over hundreds of kilometres and provide information in an accurate and actionable way has proven OptaSense to be a cost-effective solution for monitoring long borders. 
It has been scaled to cover 1500 km controlled by a single central monitoring station in pipeline applications.", "title": "" }, { "docid": "931c75847fdfec787ad6a31a6568d9e3", "text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines for selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing the details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.", "title": "" }, { "docid": "5a9209f792ddd738d44f17b1175afe64", "text": "PURPOSE\nIncrease in muscle force, endurance, and flexibility is desired in elite athletes to improve performance and to avoid injuries, but it is often hindered by the occurrence of myofascial trigger points. Dry needling (DN) has been shown effective in eliminating myofascial trigger points.\n\n\nMETHODS\nThis randomized controlled study in 30 elite youth soccer players of a professional soccer Bundesliga Club investigated the effects of four weekly sessions of DN plus water pressure massage on thigh muscle force and range of motion of hip flexion. 
A group receiving placebo laser plus water pressure massage and a group with no intervention served as controls. Data were collected at baseline (M1), treatment end (M2), and 4 wk follow-up (M3). Furthermore, a 5-month muscle injury follow-up was performed.\n\n\nRESULTS\nDN showed significant improvement of muscular endurance of knee extensors at M2 (P = 0.039) and M3 (P = 0.008) compared with M1 (M1:294.6 ± 15.4 N·m·s, M2:311 ± 25 N·m·s; M3:316.0 ± 28.6 N·m·s) and knee flexors at M2 compared with M1 (M1:163.5 ± 10.9 N·m·s, M2:188.5 ± 16.3 N·m·s) as well as hip flexion (M1: 81.5° ± 3.3°, M2:89.8° ± 2.8°; M3:91.8° ± 3.8°). Compared with placebo (3.8° ± 3.8°) and control (1.4° ± 2.9°), DN (10.3° ± 3.5°) showed a significant (P = 0.01 and P = 0.0002) effect at M3 compared with M1 on hip flexion; compared with nontreatment control (-10 ± 11.9 N·m), DN (5.2 ± 10.2 N·m) also significantly (P = 0.049) improved maximum force of knee extensors at M3 compared with M1. During the rest of the season, muscle injuries were less frequent in the DN group compared with the control group.\n\n\nCONCLUSION\nDN showed a significant effect on muscular endurance and hip flexion range of motion that persisted 4 wk posttreatment. Compared with placebo, it showed a significant effect on hip flexion that persisted 4 wk posttreatment, and compared with nonintervention control, it showed a significant effect on maximum force of knee extensors 4 wk posttreatment in elite soccer players.", "title": "" }, { "docid": "0eb98d2e5d7e3c46e1ae830c73008fd4", "text": "Twitter, the most famous micro-blogging service and online social network, collects millions of tweets every day. Due to the length limitation, users usually need to explore other ways to enrich the content of their tweets. Some studies have provided findings to suggest that users can benefit from added hyperlinks in tweets. 
In this paper, we focus on the hyperlinks in Twitter and propose a new application, called hyperlink recommendation in Twitter. We expect that the recommended hyperlinks can be used to enrich the information of user tweets. A three-way tensor is used to model the user-tweet-hyperlink collaborative relations. Two tensor-based clustering approaches, tensor decomposition-based clustering (TDC) and tensor approximation-based clustering (TAC), are developed to group the users, tweets and hyperlinks with similar interests, or similar contexts. Recommendation is then made based on the reconstructed tensor using cluster information. The evaluation results in terms of Mean Absolute Error (MAE) show the advantages of both the TDC and TAC approaches over a baseline recommendation approach, i.e., memory-based collaborative filtering. Comparatively, the TAC approach achieves better performance than the TDC approach.", "title": "" }, { "docid": "f0505768d42cd9da66520ae380447ab3", "text": "This article demonstrates that the convolutional operation can be converted to matrix multiplication, which is computed in the same way as a fully connected layer. The article is intended to help neural network beginners understand how the fully connected layer and the convolutional layer work in the backend. To be concise and to make the article more readable, we only consider the linear case. It can easily be extended to the non-linear case by plugging a non-linear function around the values, e.g., σ(x), denoted as x′.", "title": "" }, { "docid": "342bcd2509b632480c4f4e8059cfa6a1", "text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low-speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. 
The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.", "title": "" }, { "docid": "68b7c94a2efb0fefd6ad3d74a08edf87", "text": "Innovations like domain-specific hardware, enhanced security, open instruction sets, and agile chip development will lead the way.", "title": "" }, { "docid": "07fc203735e9da22e0dc49c4a1153db0", "text": "The implementation, diffusion and adoption of e-government in the public sector has been a topic that has been debated by the research community for some time. In particular, the limited adoption of e-government services is attributed to factors such as the heterogeneity of users, lack of user-orientation, the limited transformation of public sector and the mismatch between expectations and supply. In this editorial, we review theories and factors impacting implementation, diffusion and adoption of e-government. Most theories used in prior research follow mainstream information systems concepts, which can be criticized for not taking into account e-government specific characteristics. The authors argue that there is a need for e-government specific theories and methodologies that address the idiosyncratic nature of e-government as the well-known information systems concepts that are primarily developed for business contexts are not equipped to encapsulate the complexities surrounding e-government. Aspects like accountability, digital divide, legislation, public governance, institutional complexity and citizens' needs are challenging issues that have to be taken into account in e-government theory and practices. 
As such, in this editorial we argue that e-government should develop as an own strand of research, while information systems theories and concepts should not be neglected.", "title": "" }, { "docid": "ef92244350e267d3b5b9251d496e0ee2", "text": "A review of recent advances in power wafer level electronic packaging is presented based on the development of power device integration. The paper covers in more detail how advances in both semiconductor content and power advanced wafer level package design and materials have co-enabled significant advances in power device capability during recent years. Extrapolating the same trends in representative areas for the remainder of the decade serves to highlight where further improvement in materials and techniques can drive continued enhancements in usability, efficiency, reliability and overall cost of power semiconductor solutions. Along with next generation wafer level power packaging development, the role of modeling is a key to assure successful package design. An overview of the power package modeling is presented. Challenges of wafer level power semiconductor packaging and modeling in both next generation design and assembly processes are presented and discussed. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fac9465df30dd5d9ba5bc415b2be8172", "text": "In the Railway System, Railway Signalling System is the vital control equipment responsible for the safe operation of trains. In Railways, the system of communication from railway stations and running trains is by the means of signals through wired medium. Once the train leaves station, there is no communication between the running train and the station or controller. Hence, in case of failures or in emergencies in between stations, immediate information cannot be given and a particular problem will escalate with valuable time lost. Because of this problem only a single train can run in between two nearest stations. 
Nowadays, railways all over the world use optical fibre cable for communication between stations and to send signals to trains. However, the usage of optical fibre cables does not lend itself to providing trackside communication as in the case of copper cable. Hence, another transmission medium is necessary for communication outside the station limits with drivers, guards, maintenance gangs, gatemen, etc. Obviously, the medium of choice for such communication is wireless. With increasing speed and train density, adoption of train control methods such as Automatic Warning System (AWS), Automatic Train Stop (ATS), or Positive Train Separation (PTS) is a must. Even though these methods traditionally pick up their signals from track-based beacons, Wireless Sensor Network based systems will suit the railways much more. In this paper, we describe a new and innovative medium for railways, a Wireless Sensor Network (WSN) based Railway Signalling System, and conclude that the introduction of WSN in railways will not only achieve economy but will also improve the level of safety and efficiency of train operations.", "title": "" }, { "docid": "f10996698f2596de3ca7436a82e8c326", "text": "Hybrid multiple-antenna transceivers, which combine large-dimensional analog pre/postprocessing with lower-dimensional digital processing, are the most promising approach for reducing the hardware cost and training overhead in massive MIMO systems. This article provides a comprehensive survey of the various incarnations of such structures that have been proposed in the literature. We provide a taxonomy in terms of the required channel state information, that is, whether the processing adapts to the instantaneous or average (second-order) channel state information; while the former provides somewhat better signal-to-noise and interference ratio, the latter has much lower overhead for CSI acquisition. We furthermore distinguish hardware structures of different complexities. 
Finally, we point out the special design aspects for operation at millimeter-wave frequencies.", "title": "" }, { "docid": "a6bc752bd6a4fc070fa01a5322fb30a1", "text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classification algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classification techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.", "title": "" }, { "docid": "4d9312d22dcc37933d0108fbfacd1c38", "text": "This study focuses on the use of different types of shear reinforcement in reinforced concrete beams. Four different types of shear reinforcement are investigated: traditional stirrups, welded swimmer bars, bolted swimmer bars, and u-link bolted swimmer bars. Beam shear strength and beam deflection are the two main factors considered in this study. Shear failure in reinforced concrete beams is one of the most undesirable modes of failure due to its rapid progression. This sudden type of failure made it necessary to explore more effective ways to design these beams for shear. Reinforced concrete beams show different behavior at the failure stage in shear compared to bending, which is considered an unsafe mode of failure. The diagonal cracks that develop due to excess shear forces are considerably wider than the flexural cracks. The cost and safety of shear reinforcement in reinforced concrete beams led to the study of other alternatives. The swimmer bar system is a new type of shear reinforcement. 
It consists of small inclined bars, each with both ends bent horizontally for a short distance and welded or bolted to both the top and bottom flexural steel reinforcement. Regardless of the number of swimmer bars used in each inclined plane, the swimmer bars form a plane-crack interceptor system, instead of the bar-crack interceptor system formed when stirrups are used. Several reinforced concrete beams were carefully prepared and tested in the lab. The results of these tests will be presented and discussed. The deflection of each beam is also measured under incrementally increased applied load.", "title": "" }, { "docid": "034f6044eda34a00c64db60fb4144eb6", "text": "Motivation\nDiffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model.\n\n\nResults\nWe first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. 
We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training.\n\n\nAvailability and Implementation\nThe MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank .\n\n\nContact\ngribskov@purdue.edu.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" } ]
scidocsrr
71e92fac5500ae6f83cd1b7e18112be6
Design of a Cascoded Operational Amplifier with High Gain
[ { "docid": "1884e92beb10bb653af5b8efa967e92d", "text": "Presents an overview of current design techniques for operational amplifiers implemented in CMOS and NMOS technology at a tutorial level. Primary emphasis is placed on CMOS amplifiers because of their more widespread use. Factors affecting voltage gain, input noise, offsets, common mode and power supply rejection, power dissipation, and transient response are considered for the traditional bipolar-derived two-stage architecture. Alternative circuit approaches for optimization of particular performance aspects are summarized, and examples are given.", "title": "" } ]
[ { "docid": "c6741791b8685beb3eee1c721dcc255b", "text": "In on-line search and display advertising, the click-through rate (CTR) has traditionally been a key measure of ad/campaign effectiveness. More recently, the market has gained interest in more direct measures of profitability; one early alternative is the conversion rate (CVR). CVRs measure the proportion of certain users who take a predefined, desirable action, such as a purchase, registration, download, etc., as compared to simple page browsing. We provide a detailed analysis of conversion rates in the context of non-guaranteed delivery targeted advertising. In particular, we focus on the post-click conversion (PCC) problem, or the analysis of conversions after a user clicks on a referring ad. The key element we study is the probability of a conversion given a click in a user/page context, P(conversion | click, context). We provide various fundamental properties of this process based on contextual information, formalize the problem of predicting PCC, and propose an approach for measuring attribute relevance when the underlying attribute distribution is non-stationary. We provide experimental analyses based on logged events from a large-scale advertising platform.", "title": "" }, { "docid": "52cb98b269597ca840b74215116f4e45", "text": "The ubiquity of mobile devices has drawn new attention to the field of electronic government. Literature studies report on the significance of m-government, including its motivation, success, and failure in developed and developing countries. However, research on the design of m-government applications is still scarce. Design approaches in the literature lack a comprehensive way of addressing m-government challenges. This paper aims to (1) identify challenges of m-government in developed and developing countries and (2) investigate approaches used for designing m-government applications. 
The challenges are categorised based on the factors of PESTELMO and are further examined to identify requirements for suitable m-government design. Design approaches are analysed by the Content, Context and Process (CCP) framework and are examined to identify the requirements, methods and guidelines addressed. The paper finally outlines research needs for a comprehensive design framework for m-government solutions and presents initial requirements for the framework.", "title": "" }, { "docid": "2e088ce4f7e5b3633fa904eab7563875", "text": "Large numbers of websites have started to mark up their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in recent years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.", "title": "" }, { "docid": "3298ecc4169ceb0bc6352b3689f65642", "text": "The need to disinfect a patient's skin before subcutaneous or intramuscular injection is a much-debated practice. Guidance on this issue varies between NHS organisations that provide primary and secondary care. 
However, with patients being increasingly concerned with healthcare-associated infections, a general consensus needs to be reached whereby this practice is either rejected or made mandatory.", "title": "" }, { "docid": "d42bdb401ccdd416808bb91e5025f379", "text": "Blockchain technology has evolved from being an immutable ledger of transactions for cryptocurrencies to a programmable interactive environment for building distributed reliable applications. Although blockchain technology has been used to address various challenges, to our knowledge none of the previous work has focused on using blockchain to develop a secure and immutable scientific data provenance management framework that automatically verifies the provenance records. In this work, we leverage blockchain as a platform to facilitate trustworthy data provenance collection, verification and management. The developed system utilizes smart contracts and the open provenance model (OPM) to record immutable data trails. We show that our proposed framework can efficiently and securely capture and validate provenance data, and prevent any malicious modification to the captured data as long as a majority of the participants are honest.", "title": "" }, { "docid": "0801ef431c6e4dab6158029262a3bf82", "text": "A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. 
We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.", "title": "" }, { "docid": "423d8264602c19c313c044fcf08c0717", "text": "Over the last two decades, XML has gained momentum as the standard for web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It is made up of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, web and mobile services matching and composition, blog and social semantic network analysis, and ontology learning. 
Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity.", "title": "" }, { "docid": "3770720cff3a36596df097835f4f10a9", "text": "As mobile computing technologies have become more powerful and pervasive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many studies on MALL consider that emerging mobile technologies have considerable potential for effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research and reviews on mobile assisted language learning tend to focus on detailed applications of newly emerging mobile technology, rather than taking a broader view of the types of mobile devices themselves. In this paper, I thus reviewed recent research and conference papers from the last decade which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.", "title": "" }, { "docid": "7a18b4e266cb353e523addfacbdf5bdf", "text": "The field of image composition is constantly trying to improve the ways in which an image can be altered and enhanced. While this is usually done in the name of aesthetics and practicality, it also provides tools that can be used to maliciously alter images. In this sense, the field of digital image forensics has to be prepared to deal with the influx of new technology, in a constant arms race. In this paper, the current state of this arms race is analyzed, surveying the state of the art and providing means to compare both sides. 
A novel scale to classify image forensics assessments is proposed, and experiments are performed to test composition techniques with regard to different forensic traces. We show that even though research in forensics seems unaware of the advanced forms of image composition, it possesses the basic tools to detect them.", "title": "" }, { "docid": "68a1c316c50258f924d28f1a2906271c", "text": "Market segmentation is one of the most important areas of knowledge-based marketing. In banks, it is a really challenging task, as databases are large and multidimensional. In this paper we consider cluster analysis, which is the methodology most often applied in this area. We compare clustering algorithms in cases of high dimensionality with noise. We discuss the use of three algorithms: density-based DBSCAN, k-means, and a two-phase clustering process based on k-means. We compare the algorithms concerning their effectiveness and scalability. Some experiments with exemplary bank data sets are presented.", "title": "" }, { "docid": "b04ae75e4f444b97976962a397ac413c", "text": "In this paper, the new DC/DC Boost power converter-inverter-DC motor topology that allows bidirectional rotation of the motor shaft is presented. To this end, the system mathematical model is developed considering its different operation modes. Afterwards, the model validation is performed via numerical simulations by using Matlab-Simulink.", "title": "" }, { "docid": "8f304c738458fa2ccae77b3f222b45ab", "text": "A vehicular ad hoc network (VANET) serves as an application of the intelligent transportation system that improves traffic safety as well as efficiency. Vehicles in a VANET broadcast traffic and safety-related information used by road safety applications, such as an emergency electronic brake light. The broadcast of these messages in an open-access environment makes security and privacy critical and challenging issues in the VANET. 
Misuse of this information may lead to a traffic accident or, at worst, loss of human lives; therefore, vehicle authentication is a necessary requirement. During authentication, a vehicle’s privacy-related data, such as identity and location information, must be kept private. This paper presents an approach for privacy-preserving authentication in a VANET. Our hybrid approach combines the useful features of both the pseudonym-based approaches and the group signature-based approaches to preclude their respective drawbacks. The proposed approach neither requires a vehicle to manage a certificate revocation list, nor involves vehicles in any group management. The proposed approach utilizes efficient and lightweight pseudonyms that are not only used for message authentication, but also serve as a trapdoor in order to provide conditional anonymity. We present various attack scenarios that show the resilience of the proposed approach against various security and privacy threats. We also provide an analysis of computational and communication overhead to show the efficiency of the proposed technique. In addition, we carry out extensive simulations in order to present a detailed network performance analysis. The results show the feasibility of our proposed approach in terms of end-to-end delay and packet delivery ratio.", "title": "" }, { "docid": "c26a1d7fc8e632e9e7d3ea149bc80ea0", "text": "Pain associated with integumentary wounds is highly prevalent, yet it remains an area of significant unmet need within health care. Currently, systemically administered opioids are the mainstay of treatment. However, recent publications are casting opioids in a negative light given their high side effect profile, inhibition of wound healing, and association with accidental overdose, incidents that are frequently fatal. Thus, novel analgesic strategies for wound-related pain need to be investigated. 
The ideal methods of pain relief for wound patients are modalities that are topical, noninvasive and self-administered, lack systemic side effects, and display rapid onset of analgesia. Extracts derived from the cannabis plant have been applied to wounds for thousands of years. The discovery of the human endocannabinoid system and its dominant presence throughout the integumentary system provides a valid and logical scientific platform to consider the use of topical cannabinoids for wounds. We report a prospective case series of three patients with pyoderma gangrenosum who were treated with topical medical cannabis compounded in nongenetically modified organic sunflower oil. Clinically significant analgesia that was associated with reduced opioid utilization was noted in all three cases. Topical medical cannabis has the potential to improve pain management in patients suffering from wounds of all classes.", "title": "" }, { "docid": "6f6cd699a625748522e5e10b6e310e69", "text": "Research on organizational justice has focused primarily on the receivers of just and unjust treatment. Little is known about why managers adhere to or violate rules of justice in the first place. The authors introduce a model for understanding justice rule adherence and violation. They identify both cognitive motives and affective motives that explain why managers adhere to and violate justice rules. They also draw distinctions among the justice rules by specifying which rules offer managers more or less discretion in their execution. They then describe how motives and discretion interact to influence justice-relevant actions. Finally, the authors incorporate managers' emotional reactions to consider how their actions may change over time. Implications of the model for theory, research, and practice are discussed.", "title": "" }, { "docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea", "text": "G. Francis and F. 
Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al.'s data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al.'s results.", "title": "" }, { "docid": "3ee39231fc2fbf3b6295b1b105a33c05", "text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publicly traded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply well-known regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.", "title": "" }, { "docid": "5e806d14356729d7c96dcf2d97ba9c30", "text": "Recently, a variety of bioactive protein drugs have become available in large quantities as a result of advances in biotechnology. Such availability has prompted development of long-term protein delivery systems. Biodegradable microparticulate systems have been used widely for controlled release of protein drugs over days and months. The most widely used biodegradable polymer has been poly(d,l-lactic-co-glycolic acid) (PLGA). 
Protein-containing microparticles are usually prepared by the water/oil/water (W/O/W) double emulsion method, and variations of this method, such as solid/oil/water (S/O/W) and water/oil/oil (W/O/O), have also been used. Other methods of preparation include spray drying, ultrasonic atomization, and electrospray methods. The important factors in developing biodegradable microparticles for protein drug delivery are the protein release profile (including burst release, duration of release, and extent of release), microparticle size, protein loading, encapsulation efficiency, and bioactivity of the released protein. Many studies used albumin as a model protein, and thus, the bioactivity of the released protein has not been examined. Other studies which utilized enzymes, insulin, erythropoietin, and growth factors have suggested that the right formulation to preserve bioactivity of the loaded protein drug during the processing and storage steps is important. The protein release profiles from various microparticle formulations can be classified into four distinct categories (Types A, B, C, and D). The categories are based on the magnitude of burst release, the extent of protein release, and the protein release kinetics following the burst release. The protein loading (i.e., the total amount of protein loaded divided by the total weight of microparticles) in various microparticles is 6.7 ± 4.6%, and it ranges from 0.5% to 20.0%. Development of clinically successful long-term protein delivery systems based on biodegradable microparticles requires improvement in the drug loading efficiency, control of the initial burst release, and the ability to control the protein release kinetics.", "title": "" }, { "docid": "3d84f5f8322737bf8c6f440180e07660", "text": "Incremental Dialog Processing (IDP) enables Spoken Dialog Systems to gradually process minimal units of user speech in order to give the user an early system response. 
In this paper, we present an application of IDP that shows its effectiveness in a task-oriented dialog system. We have implemented an IDP strategy and deployed it for one month on a real-user system. We compared the resulting dialogs with dialogs produced over the previous month without IDP. Results show that the incremental strategy significantly improved system performance by eliminating long and often off-task utterances that generally produce poor speech recognition results. User behavior is also affected; the user tends to shorten utterances after being interrupted by the system.", "title": "" }, { "docid": "90125582272e3f16a34d5d0c885f573a", "text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. 
Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.", "title": "" }, { "docid": "cd48c6b722f8e88f0dc514fcb6a0d890", "text": "Multi-tier data-intensive applications are widely deployed in virtualized data centers for high scalability and reliability. As the response time is vital for user satisfaction, this requires achieving good performance at each tier of the applications in order to minimize the overall latency. However, in such virtualized environments, each tier (e.g., application, database, web) is likely to be hosted by different virtual machines (VMs) on multiple physical servers, where a guest VM is unaware of changes outside its domain, and the hypervisor also does not know the configuration and runtime status of a guest VM. As a result, isolated virtualization domains lend themselves to performance unpredictability and variance. In this paper, we propose IOrchestra, a holistic collaborative virtualization framework, which bridges the semantic gaps of I/O stacks and system information across multiple VMs, improves virtual I/O performance through collaboration from guest domains, and increases resource utilization in data centers. We present several case studies to demonstrate that IOrchestra is able to address numerous drawbacks of the current practice and improve the I/O latency of various distributed cloud applications by up to 31%.", "title": "" } ]
scidocsrr
3624179dc3b2b68cfcce38e420b33040
Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.
[ { "docid": "6ab433155baadb12c514650f57ccaad8", "text": "We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We explored recognition of facial actions from the facial action coding system (FACS), as well as recognition of full facial expressions. Each video frame is first scanned in real time to detect approximately upright frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, linear discriminant analysis, as well as feature selection techniques. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for recognition of full facial expressions in a 7-way forced choice was 93% correct, the best performance reported so far on the Cohn-Kanade FACS-coded expression dataset. We also applied the system to fully automated facial action coding. The present system classifies 18 action units, whether they occur singly or in combination with other actions, with a mean agreement rate of 94.5% with human FACS codes in the Cohn-Kanade dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics.", "title": "" } ]
[ { "docid": "1b4a97df029e45e8d4cf8b8c548c420a", "text": "Today, online social networks have become powerful tools for the spread of information. They facilitate the rapid and large-scale propagation of content, and the consequences of a piece of information -- whether it is favorable to someone or not, false or true -- can then take on considerable proportions. Therefore it is essential to provide means to analyze the phenomenon of information dissemination in such networks. Many recent studies have addressed the modeling of the process of information diffusion, from a topological point of view and in a theoretical perspective, but we still know little about the factors involved in it. With the assumption that the dynamics of the spreading process at the macroscopic level is explained by interactions at the microscopic level between pairs of users and the topology of their interconnections, we propose a practical solution which aims to predict the temporal dynamics of diffusion in social networks. Our approach is based on machine learning techniques and the inference of time-dependent diffusion probabilities from a multidimensional analysis of individual behaviors. Experimental results on a real dataset extracted from Twitter show the interest and effectiveness of the proposed approach, as well as interesting recommendations for future investigation.", "title": "" }, { "docid": "b105711c0aabde844b46c3912cf78363", "text": "CONFLICT OF INTEREST\nnone declared.\n\n\nINTRODUCTION\nThe incidence of diabetes type 2 (diabetes mellitus type 2 - DM 2) is rapidly increasing worldwide. Physical inactivity and obesity are the major determinants of the disease. Primary prevention of DM 2 entails health monitoring of people in the at-risk category. 
People with impaired glycemic control are at high risk for development of DM 2 and enter the intensive supervision program for primary and secondary prevention.\n\n\nOBJECTIVE OF THE RESEARCH\nTo evaluate the impact of metformin and lifestyle modification on glycemia and obesity in patients with prediabetes.\n\n\nPATIENTS AND METHODS\nThe study was conducted on three groups of 20 patients each (a total of 60 patients) aged from 45 to 80, with abnormal glycoregulation and prediabetes. The study did not include patients who already met the diagnostic criteria for diabetes. During the study period of 6 months, one group was extensively educated on changing lifestyle (healthy nutrition and increased physical activity), the second group was treated with 500 mg metformin twice a day, while the control group was given advice about diet and physical activity, but in a different manner from the first two groups. At the beginning of the study, initial levels of blood glucose, HbA1C, BMI (Body Mass Index), body weight, height and waist size were measured for all patients. The same measurements were taken again at the end of the study, 6 months later. Diabetes control was assessed using fasting plasma glucose (FPG), glucose measured 2 hours after a glucose load, and HbA1C.\n\n\nRESULTS\nAt the beginning of the study, the average HbA1C (%) values in the three groups, according to the type of intervention (lifestyle changes, metformin, control group), were as follows: (6.4 ± 0.5 mmol / l), (6.5 ± 1.2 mmol / l), (6.7 ± 0.5 mmol / l). At the end of the research, the average HbA1C values were: 6.2 ± 0.3 mmol / l, 6.33 ± 0.5 mmol / l and 6.7 ± 1.4 mmol / l. In the group of patients who received intensive training on lifestyle change and in the group treated with metformin, the average blood glucose and HbA1C values remained within the reference range, and the criteria for the diagnosis of diabetes were not met. 
Unlike the control group, the group that was well educated on changing habits decreased its average body weight by 4.25 kg, BMI by 1.3 and waist size by 2.5 cm. Metformin therapy led to an average reduction of 3.83 kg in body weight, 1.33 in BMI and 3.27 cm in waist size. Changing lifestyle (healthy diet and increased physical activity) led to a reduction in total body weight in 60% of patients and in BMI in 65% of patients, whereas metformin therapy led to a reduction in total body weight in 50% and in BMI in 45% of patients. In the control group, an overall reduction in body weight was observed in 25% of patients, and in BMI in 15%.\n\n\nCONCLUSION\nModification of lifestyle (healthy diet and increased physical activity) or use of metformin may improve glycemic regulation, reduce obesity and prevent or delay the onset of DM 2.", "title": "" }, { "docid": "172a35c941407bb09c8d41953dfc6d37", "text": "Multi-task learning (MTL) is a machine learning paradigm that improves the performance of each task by exploiting useful information contained in multiple related tasks. However, the relatedness of tasks can be exploited by attackers to launch data poisoning attacks, which have been demonstrated to be a big threat to single-task learning. In this paper, we provide the first study on the vulnerability of MTL. Specifically, we focus on multi-task relationship learning (MTRL) models, a popular subclass of MTL models where task relationships are quantized and are learned directly from training data. We formulate the problem of computing optimal poisoning attacks on MTRL as a bilevel program that is adaptive to an arbitrary choice of target tasks and attacking tasks. We propose an efficient algorithm called PATOM for computing optimal attack strategies. PATOM leverages the optimality conditions of the subproblem of MTRL to compute the implicit gradients of the upper-level objective function. 
Experimental results on real-world datasets show that MTRL models are very sensitive to poisoning attacks and that the attacker can significantly degrade the performance of target tasks, either by directly poisoning the target tasks or by indirectly poisoning the related tasks through exploiting the task relatedness. We also found that the tasks being attacked are always strongly correlated, which provides a clue for defending against such attacks.", "title": "" }, { "docid": "b55eb410f2a2c7eb6be1c70146cca203", "text": "Permissioned blockchains are arising as a solution to federate companies while prompting accountable interactions. A variety of consensus algorithms for such blockchains have been proposed, each of which has different benefits and drawbacks. Proof-of-Authority (PoA) is a new family of Byzantine fault-tolerant (BFT) consensus algorithms largely used in practice to ensure better performance than traditional Practical Byzantine Fault Tolerance (PBFT). However, the lack of adequate analysis of PoA hinders any cautious evaluation of its effectiveness in real-world permissioned blockchains deployed over the Internet, hence on an eventually synchronous network experiencing Byzantine nodes. In this paper, we analyse two of the main PoA algorithms, named Aura and Clique, both in terms of provided guarantees and performance. First, we derive their functioning, including how messages are exchanged; then we weigh, by relying on the CAP theorem, their consistency, availability and partition tolerance guarantees. We also report a qualitative latency analysis based on message rounds. The analysis advocates that PoA algorithms for permissioned blockchains, deployed over the Internet with Byzantine nodes, do not provide adequate consistency guarantees for scenarios where data integrity is essential. 
We claim that PBFT can better fit such scenarios, despite a limited loss in terms of performance.", "title": "" }, { "docid": "b0ae3875b79f8453a3752d1e684abeaa", "text": "This study applied a functional approach to the assessment of self-mutilative behavior (SMB) among adolescent psychiatric inpatients. On the basis of past conceptualizations of different forms of self-injurious behavior, the authors hypothesized that SMB is performed because of the automatically reinforcing (i.e., reinforced by oneself; e.g., emotion regulation) and/or socially reinforcing (i.e., reinforced by others; e.g., attention, avoidance-escape) properties associated with such behaviors. Data were collected from 108 adolescent psychiatric inpatients referred for self-injurious thoughts or behaviors. Adolescents reported engaging in SMB frequently, using multiple methods, and having an early age of onset. Moreover, the results supported the structural validity and reliability of the hypothesized functional model of SMB. Most adolescents engaged in SMB for automatic reinforcement, although a sizable portion endorsed social reinforcement functions as well. These findings have direct implications for the understanding, assessment, and treatment of SMB.", "title": "" }, { "docid": "394410f85e2911eb95678472e35bb9e1", "text": "The purpose of this article was to build a license plate recognition system with high accuracy at night. The system, based on a regular PC, captures video frames which include a visible car license plate and processes them. Once a license plate is detected, its digits are recognized and then checked against a database. The focus is on the modified algorithms used to identify the individual characters. In this article, we use the template-matching method and the neural net method together, improving on the earlier study. 
The results showed that the accuracy is higher at night.", "title": "" }, { "docid": "be43b90cce9638b0af1c3143b6d65221", "text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been developed for representing the digital and empirical provenance of digi-", "title": "" }, { "docid": "d83062e4022f6282d7d9b99b8d239715", "text": "Annexin A1 (ANXA1) is an endogenous protein with potent anti-inflammatory properties in the brain. Although ANXA1 has been predominantly studied for its binding to formyl peptide receptors (FPRs) on plasma membranes, little is known regarding whether this protein has an anti-inflammatory effect in the cytosol. Here, we investigated the mechanism by which the ANXA1 peptide Ac2-26 decreases high TNF-α production and IKKβ activity, which was caused by oxygen glucose deprivation/reperfusion (OGD/R)-induced neuronal conditioned medium (NCM) in microglia. We found that exogenous Ac2-26 crosses into the cytoplasm of microglia and inhibits both gene expression and protein secretion of TNF-α. Ac2-26 also causes a decrease in IKKβ protein but not IKKβ mRNA, and this effect is reversed by the lysosome inhibitor NH4CL. Furthermore, we demonstrate that Ac2-26 induces IKKβ accumulation in lysosomes and that lysosomal-associated membrane protein 2A (LAMP-2A), not LC-3, is enhanced in microglia exposed to Ac2-26. 
We hypothesize that Ac2-26 mediates IKKβ degradation in lysosomes through chaperone-mediated autophagy (CMA). Interestingly, ANXA1 in the cytoplasm does not interact with IKKβ but with HSPB1, and Ac2-26 promotes HSPB1 binding to IKKβ. Furthermore, both ANXA1 and HSPB1 can interact with Hsc70 and LAMP-2A, but IKKβ only associates with LAMP-2A. Downregulation of HSPB1 or LAMP-2A reverses the degradation of IKKβ induced by Ac2-26. Taken together, these findings define an essential role of exogenous Ac2-26 in microglia and demonstrate that Ac2-26 is associated with HSPB1 and promotes HSPB1 binding to IKKβ, which is degraded by CMA, thereby reducing TNF-α expression.", "title": "" }, { "docid": "fbbf7c30f7ebcd2b9bbc9cc7877309b1", "text": "People detection is essential in a lot of different systems. Many applications nowadays tend to require people detection to achieve certain tasks. These applications come under many disciplines, such as robotics, ergonomics, biomechanics, gaming and automotive industries. This wide range of applications makes human body detection an active area of research. With the release of depth sensors or RGB-D cameras such as Microsoft Kinect, this area of research became more active, especially with their affordable price. Human body detection requires adaptation to many scenarios and situations. Various conditions such as occlusions, background cluttering and props attached to the human body require training on custom built datasets. In this paper we present an approach to prepare training datasets to detect and track a human body with attached props. The proposed approach uses rigid body physics simulation to create and animate different props attached to the human body. Three scenarios are implemented. In the first scenario the prop is closely attached to the human body, such as a person carrying a backpack. In the second scenario, the prop is slightly attached to the human body, such as a person carrying a briefcase. 
In the third scenario the prop is not attached to the human body, such as a person dragging a trolley bag. Our approach gives results with an accuracy of 93% in identifying both the human body parts and the attached prop in all three scenarios.", "title": "" }, { "docid": "2b61a16b47d865197c6c735cefc8e3ec", "text": "The present study investigated the relationship between trauma symptoms and a history of child sexual abuse, adult sexual assault, and physical abuse by a partner as an adult. While there has been some research examining the correlation between individual victimization experiences and traumatic stress, the cumulative impact of multiple victimization experiences has not been addressed. Subjects were recruited from psychological clinics and community advocacy agencies. Additionally, a nonclinical undergraduate student sample was evaluated. The results of this study indicate not only that victimization and revictimization experiences are frequent, but also that the level of trauma-specific symptoms is significantly related to the number of different types of reported victimization experiences. The research and clinical implications of these findings are discussed.", "title": "" }, { "docid": "33eebe279e80452aec3e2e5bd28a708d", "text": "Context-aware recommender systems go beyond the traditional personalized recommendation models by incorporating a form of situational awareness. They provide recommendations that not only correspond to a user's preference profile, but that are also tailored to a given situation or context. We consider the setting in which contextual information is represented as a subset of an item feature space describing short-term interests or needs of a user in a given situation. This contextual information can be provided by the user in the form of an explicit query, or derived implicitly.\n We propose a unified probabilistic model that integrates user profiles, item representations, and contextual information. 
The resulting recommendation framework computes the conditional probability of each item given the user profile and the additional context. These probabilities are used as recommendation scores for ranking items. Our model is an extension of the Latent Dirichlet Allocation (LDA) model that provides the capability for joint modeling of users, items, and the meta-data associated with contexts. Each user profile is modeled as a mixture of the latent topics. The discovered latent topics enable our system to handle missing data in item features. We demonstrate the application of our framework for article and music recommendation. In the latter case, the set of popular tags from social tagging Web sites are used for context descriptions. Our evaluation results show that considering context can help improve the quality of recommendations.", "title": "" }, { "docid": "790de0f792c81b9e26676f800e766759", "text": "The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. 
The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.", "title": "" }, { "docid": "b123916f2795ab6810a773ac69bdf00b", "text": "The acceptance of open data practices by individuals and organizations has led to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various reasons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.", "title": "" }, { "docid": "7c3457a5ca761b501054e76965b41327", "text": "Background learning is a pre-processing step of motion detection, which is a basic step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic backgrounds still leave much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. 
Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.", "title": "" }, { "docid": "0066d03bf551e64b9b4a1595f1494347", "text": "Visual Text Analytics has been an active area of interdisciplinary research (http://textvis.lnu.se/). This interactive tutorial is designed to give attendees an introduction to the area of information visualization, with a focus on linguistic visualization. After an introduction to the basic principles of information visualization and visual analytics, this tutorial will give an overview of the broad spectrum of linguistic and text visualization techniques, as well as their application areas [3]. This will be followed by a hands-on session that will allow participants to design their own visualizations using tools (e.g., Tableau), libraries (e.g., d3.js), or applying sketching techniques [4]. Some sample datasets will be provided by the instructor. 
Besides general techniques, special access will be provided to use the VisArgue framework [1] for the analysis of selected datasets.", "title": "" }, { "docid": "b06fd59d5acdf6dd0b896a62f5d8b123", "text": "BACKGROUND\nHippocampal volume reduction has been reported inconsistently in people with major depression.\n\n\nAIMS\nTo evaluate the interrelationships between hippocampal volumes, memory and key clinical, vascular and genetic risk factors.\n\n\nMETHOD\nTotals of 66 people with depression and 20 control participants underwent magnetic resonance imaging and clinical assessment. Measures of depression severity, psychomotor retardation, verbal and visual memory and vascular and specific genetic risk factors were collected.\n\n\nRESULTS\nReduced hippocampal volumes occurred in older people with depression, those with both early-onset and late-onset disorders and those with the melancholic subtype. Reduced hippocampal volumes were associated with deficits in visual and verbal memory performance.\n\n\nCONCLUSIONS\nAlthough reduced hippocampal volumes are most pronounced in late-onset depression, older people with early-onset disorders also display volume changes and memory loss. No clear vascular or genetic risk factors explain these findings. Hippocampal volume changes may explain how depression emerges as a risk factor to dementia.", "title": "" }, { "docid": "0cb944545afbd19d1441433c621a6d66", "text": "In this paper, we propose a fine-grained image categorization system with easy deployment. We do not use any object/part annotation (weakly supervised) in the training or in the testing stage, but only class labels for training images. Fine-grained image categorization aims to classify objects with only subtle distinctions (e.g., two breeds of dogs that look alike). Most existing works heavily rely on object/part detectors to build the correspondence between object parts, which require accurate object or object part annotations at least for training images. 
The need for expensive object annotations prevents the wide usage of these methods. Instead, we propose to generate multi-scale part proposals from object proposals, select useful part proposals, and use them to compute a global image representation for categorization. This is specially designed for the weakly supervised fine-grained categorization task, because useful parts have been shown to play a critical role in existing annotation-dependent works, but accurate part detectors are hard to acquire. With the proposed image representation, we can further detect and visualize the key (most discriminative) parts in objects of different classes. In the experiments, the proposed weakly supervised method achieves comparable or better accuracy than the state-of-the-art weakly supervised methods and most existing annotation-dependent methods on three challenging datasets. Its success suggests that it is not always necessary to learn expensive object/part detectors in fine-grained image categorization.", "title": "" }, { "docid": "3429145583d25ba1d603b5ade11f4312", "text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of Apriori, which may substantially reduce the number of combinations to be examined. However, Apriori still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefix-projection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. 
Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the Apriori-based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence", "title": "" } ]
scidocsrr
c319c491304cba0ffe7dcde7737b2f1f
Distributed MAP Inference for Undirected Graphical Models
[ { "docid": "7d3c07b505e27fdfea4ada999a233169", "text": "Discriminatively trained undirected graphical models have had wide empirical success, and there has been increasing interest in toolkits that ease their application to complex relational data. The power in relational models is in their repeated structure and tied parameters; at issue is how to define these structures in a powerful and flexible way. Rather than using a declarative language, such as SQL or first-order logic, we advocate using an imperative language to express various aspects of model structure, inference, and learning. By combining the traditional, declarative, statistical semantics of factor graphs with imperative definitions of their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE, a software library for an object-oriented, strongly-typed, functional language. In experimental comparisons to Markov Logic Networks on joint segmentation and coreference, we find our approach to be 3-15 times faster while reducing error by 20-25%—achieving a new state of the art.", "title": "" }, { "docid": "b35884fa735db116d20ecbe3e03765c2", "text": "This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. 
We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger.", "title": "" } ]
[ { "docid": "c538390f75ae57ab65e6f9388fbfd1a0", "text": "Deep Deterministic Policy Gradient (DDPG) algorithm has been successful for state-of-the-art performance in high-dimensional continuous control tasks. However, due to the complexity and randomness of the environment, DDPG tends to suffer from inefficient exploration and unstable training. In this work, we propose Self-Adaptive Double Bootstrapped DDPG (SOUP), an algorithm that extends DDPG to bootstrapped actor-critic architecture. SOUP improves the efficiency of exploration by multiple actor heads capturing more potential actions and multiple critic heads evaluating more reasonable Q-values collaboratively. The crux of double bootstrapped architecture is to tackle the fluctuations in performance, caused by multiple heads of spotty capacity varying throughout training. To alleviate the instability, a self-adaptive confidence mechanism is introduced to dynamically adjust the weights of bootstrapped heads and enhance the ensemble performance effectively and efficiently. We demonstrate that SOUP achieves faster learning by at least 45% while improving cumulative reward and stability substantially in comparison to vanilla DDPG on OpenAI Gym’s MuJoCo environments.", "title": "" }, { "docid": "bdb4aba2b34731ffdf3989d6d1186270", "text": "In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has significantly increased in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question if top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods comparing their top-k performance both from a practical as well as from a theoretical perspective. 
Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses lead typically to further improvements while being faster to train than the softmax.", "title": "" }, { "docid": "50e9cf4ff8265ce1567a9cc82d1dc937", "text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. 
UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 imadan@stanford.edu Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naïve Bayesian classifier. A Naïve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So it's pretty clear by now that statistics and machine learning aren't very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... 
Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models", "title": "" }, { "docid": "def621d47a8ead24754b1eebe590314a", "text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. 
An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.", "title": "" }, { "docid": "58d4b95cc0ce39126c962e88b1bd6ba1", "text": "The quality of image encryption is commonly measured by the Shannon entropy over the ciphertext image. However, this measurement does not consider to the randomness of local image blocks and is inappropriate for scrambling based image encryption methods. In this paper, a new information entropy-based randomness measurement for image encryption is introduced which, for the first time, answers the question of whether a given ciphertext image is sufficiently random-like. It measures the randomness over the ciphertext in a fairer way by calculating the averaged entropy of a series of small image blocks within the entire test image. In order to fulfill both quantitative and qualitative measurement, the expectation and the variance of this averaged block entropy for a true-random image are strictly derived and corresponding numerical reference tables are also provided. Moreover, a hypothesis test at significance α-level is given to help accept or reject the hypothesis that the test image is ideally encrypted/random-like. Simulation results show that the proposed test is able to give both effectively quantitative and qualitative results for image encryption. The same idea can also be applied to measure other digital data, like audio and video.", "title": "" }, { "docid": "c49ed75ce48fb92db6e80e4fe8af7127", "text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. 
Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.", "title": "" }, { "docid": "8238edb8ec7b9b1dd076c61c619b5da3", "text": "Two complexity parameters of EEG, i.e. approximate entropy (ApEn) and Kolmogorov complexity (Kc) are utilized to characterize the complexity and irregularity of EEG data under the different mental fatigue states. Then the kernel principal component analysis (KPCA) and Hidden Markov Model (HMM) are combined to differentiate two mental fatigue states. The KPCA algorithm is employed to extract nonlinear features from the complexity parameters of EEG and improve the generalization performance of HMM. The investigation suggests that ApEn and Kc can effectively describe the dynamic complexity of EEG, which is strongly correlated with mental fatigue. Both complexity parameters are significantly decreased (P < 0.005) as the mental fatigue level increases. These complexity parameters may be used as the indices of the mental fatigue level. Moreover, the joint KPCA–HMM method can effectively reduce the dimensionality of the feature vectors, accelerate the classification speed and achieve higher classification accuracy (84%) of mental fatigue. Hence KPCA–HMM could be a promising model for the estimation of mental fatigue. Crown Copyright 2010 Published by Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "d7a1985750fe10273c27f7f8121640ac", "text": "The large volumes of data that will be produced by ubiquitous sensors and meters in future smart distribution networks represent an opportunity for the use of data analytics to extract valuable knowledge and, thus, improve Distribution Network Operator (DNO) planning and operation tasks. Indeed, applications ranging from outage management to detection of non-technical losses to asset management can potentially benefit from data analytics. However, despite all the benefits, each application presents DNOs with diverse data requirements and the need to define an adequate approach. Consequently, it is critical to understand the different interactions among applications, monitoring infrastructure and approaches involved in the use of data analytics in distribution networks. To assist DNOs in the decision making process, this work presents some of the potential applications where data analytics are likely to improve distribution network performance and the corresponding challenges involved in its implementation.", "title": "" }, { "docid": "e28ab50c2d03402686cc9a465e1231e7", "text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. 
Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.", "title": "" }, { "docid": "ed4463ff17bbaf64d45012ae2aaae50b", "text": "Functional Arabic Morphology is a formulation of the Arabic inflectional system seeking the working interface between morphology and syntax. ElixirFM is its high-level implementation that reuses and extends the Functional Morphology library for Haskell. Inflection and derivation are modeled in terms of paradigms, grammatical categories, lexemes and word classes. The computation of analysis or generation is conceptually distinguished from the general-purpose linguistic model. The lexicon of ElixirFM is designed with respect to abstraction, yet is no more complicated than printed dictionaries. It is derived from the open-source Buckwalter lexicon and is enhanced with information sourcing from the syntactic annotations of the Prague Arabic Dependency Treebank.", "title": "" }, { "docid": "3500278940baaf6f510ad47463cbf5ed", "text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). 
According to the experimental results on the STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) and does not require pretrained word embeddings to have the same dimension.", "title": "" }, { "docid": "11ce5da16cf0c0c6cfb85e0d0bbdc13e", "text": "Recently, fully-connected and convolutional neural networks have been trained to reach state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics data. For classification tasks, many of these “deep learning” models employ the softmax activation function to learn output labels in 1-of-K format. In this paper, we demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. In almost all of the previous works, hidden representations of deep networks are first learned using supervised or unsupervised techniques, and then are fed into SVMs as inputs. In contrast to those models, we are proposing to train all layers of the deep networks by backpropagating gradients through the top-level SVM, learning features of all layers. Our experiments show that simply replacing softmax with linear SVMs gives significant gains on the datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop’s face expression recognition challenge.", "title": "" }, { "docid": "fd7799d569bdc4ad48a88070974f6c13", "text": "This paper presents a new large scale dataset targeting evaluation of local shape descriptors and 3d object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views, a total of approximately 3204 views. 
Each of the physical scenes contains 10 occluded objects, resulting in a dataset with 32040 unique object poses and 45 different object models. The 45 object models are full 360 degree models which are scanned with a high precision structured light scanner and a turntable. All the included objects belong to different geometric groups: concave, convex, cylindrical and flat 3D object models. The object models have varying amounts of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows, as expected, that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat and cylindrical objects. It is our objective that this dataset contributes to the future development of the next generation of 3D object recognition algorithms. The dataset is publicly available at http://roboimagedata.compute.dtu.dk/.", "title": "" }, { "docid": "984ff576f35553d793aeec6cc48c5ff0", "text": "The popularity of the stakeholder model has been achieved thanks to its powerful visual scheme and its very simplicity. Stakeholder management has become an important tool to transfer ethics to management practice and strategy. Nevertheless, legitimate criticism continues to insist on clarification and emphasises the perfectible nature of the model. Here, rather than building on the discussion from a philosophical or theoretical point of view, a different and innovative approach has been chosen: the analysis will return to the origin of stakeholder theory and will keep the graphical framework firmly in perspective. 
It will confront the stakeholder model’s graphical representation to the discussion on stakeholder definition, stakeholder identification and categorisation, to re-centre the debate to the strategic origin of the stakeholder model. The ambiguity and the vagueness of the stakeholder concept are discussed from managerial and legal approaches. The impacts of two major shortcomings of the popular stakeholder framework are examined: the boundaries and the level of the firm’s environment, and the ambivalent position of pressure groups and regulators. Working pragmatically, with a focus on the managerial and organisational perspective, an attempt is made to clarify the categorisations and classifications by introducing new terminology with a distinction between stakeholders, stakewatchers and stakekeepers. The analysis will finally lead to a proposed upgraded and refined version of the stakeholder model, with incremental ameliorations close to Freeman’s original model and a return of focus to its essence, the managerial implications in a strategic approach.", "title": "" }, { "docid": "471471cfc90e7f212dd7bbbee08d714e", "text": "Every year, a large number of children in the United States enter the foster care system. Many of them are eventually reunited with their biological parents or quickly adopted. A significant number, however, face long-term foster care, and some of these children are eventually adopted by their foster parents. The decision by foster parents to adopt their foster child carries significant economic consequences, including forfeiting foster care payments while also assuming responsibility for medical, legal, and educational expenses, to name a few. Since 1980, U.S. states have begun to offer adoption subsidies to offset some of these expenses, significantly lowering the cost of adopting a child who is in the foster care system. 
This article presents empirical evidence of the role that these economic incentives play in foster parents’ decision of when, or if, to adopt their foster child. We find that adoption subsidies increase adoptions through two distinct price mechanisms: by lowering the absolute cost of adoption, and by lowering the relative cost of adoption versus long-term foster care.", "title": "" }, { "docid": "569f8890a294b69d688977fc235aef17", "text": "Traditionally, voice communication over the local loop has been provided by wired systems. In particular, twisted pair has been the standard means of connection for homes and offices for several years. However in the recent past there has been an increased interest in the use of radio access technologies in local loops. Such systems which are now popular for their ease and low cost of installation and maintenance are called Wireless in Local Loop (WLL) systems. Subscribers' demands for greater capacity has grown over the years especially with the advent of the Internet. Wired local loops have responded to these increasing demands through the use of digital technologies such as ISDN and xDSL. Demands for enhanced data rates are being faced by WLL system operators too, thus entailing efforts towards more efficient bandwidth use. Multi-hop communication has already been studied extensively in Ad hoc network environments and has begun making forays into cellular systems as well. Multi-hop communication has been proven as one of the best ways to enhance throughput in a wireless network. Through this effort we study the issues involved in multi-hop communication in a wireless local loop system and propose a novel WLL architecture called Throughput enhanced Wireless in Local Loop (TWiLL). Through a realistic simulation model we show the tremendous performance improvement achieved by TWiLL over WLL. Traditional pricing schemes employed in single hop wireless networks cannot be applied in TWiLL -- a multi-hop environment. 
We also propose three novel cost reimbursement based pricing schemes which could be applied in such a multi-hop environment.", "title": "" }, { "docid": "22445127362a9a2b16521a4a48f24686", "text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.", "title": "" }, { "docid": "ed44c393c44ee6e63cab1305146a4f9d", "text": "This paper presents a novel method for online and incremental appearance-based localization and mapping in a highly dynamic environment. Using position-invariant robust features (PIRFs), the method can achieve a high rate of recall with 100% precision. It can handle both strong perceptual aliasing and dynamic changes of places efficiently. Its performance also extends beyond conventional images; it is applicable to omnidirectional images for which the major portions of scenes are similar for most places. 
The proposed PIRF-based Navigation method named PIRF-Nav is evaluated by testing it on two standard datasets, as in FAB-MAP, and on an additional omnidirectional image dataset that we collected. This extra dataset is collected on two days with different specific events, i.e., an open-campus event, to present challenges related to illumination variance and strong dynamic changes, and to test assessment of dynamic scene changes. Results show that PIRF-Nav outperforms FAB-MAP; PIRF-Nav at precision-1 yields a recall rate about two times (approximately 80%) higher than that of FAB-MAP. Its computation time is sufficiently short for real-time applications. The method is fully incremental, and requires no offline process for dictionary creation. Additional testing using combined datasets proves that PIRF-Nav can function over a long term and can solve the kidnapped robot problem.", "title": "" }, { "docid": "27c125643ffc8f1fee7ed5ee22025c01", "text": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, IMAGENET-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called IMAGENET-P which enables researchers to benchmark a classifier’s robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations, not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. 
Together our benchmarks may aid future work toward networks that robustly generalize.", "title": "" }, { "docid": "d438491c76e6afcdd7ad9a6351f1fda8", "text": "Acoustic word embeddings — fixed-dimensional vector representations of variable-length spoken word segments — have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to the same word, while being dissimilar for segments corresponding to different words. Recent work has found that acoustic word embeddings can outperform dynamic time warping on query-by-example search and related word discrimination tasks. However, the space of embedding models and training approaches is still relatively unexplored. In this paper we present new discriminative embedding models based on recurrent neural networks (RNNs). We consider training losses that have been successful in prior work, in particular a cross entropy loss for word classification and a contrastive loss that explicitly aims to separate same-word and different-word pairs in a “Siamese network” training setting. We find that both classifier-based and Siamese RNN embeddings improve over previously reported results on a word discrimination task, with Siamese RNNs outperforming classification models. In addition, we present analyses of the learned embeddings and the effects of variables such as dimensionality and network structure.", "title": "" } ]
scidocsrr
d75d1cdb473873b2d4e8e2f13715c738
How Teachers Use Data to Help Students Learn: Contextual Inquiry for the Design of a Dashboard
[ { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "04d75786e12cabf5c849971ea4eb34c8", "text": "In this paper we present a learning analytics conceptual framework that supports enquiry-based evaluation of learning designs. The dimensions of the proposed framework emerged from a review of existing analytics tools, the analysis of interviews with teachers, and user scenarios to understand what types of analytics would be useful in evaluating a learning activity in relation to pedagogical intent. 
The proposed framework incorporates various types of analytics, with the teacher playing a key role in bringing context to the analysis and making decisions on the feedback provided to students as well as the scaffolding and adaptation of the learning design. The framework consists of five dimensions: temporal analytics, tool-specific analytics, cohort dynamics, comparative analytics and contingency. Specific metrics and visualisations are defined for each dimension of the conceptual framework. Finally the development of a tool that partially implements the conceptual framework is discussed.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" } ]
[ { "docid": "375470d901a7d37698d34747621667ce", "text": "RNA interference (RNAi) has recently emerged as a specific and efficient method to silence gene expression in mammalian cells either by transfection of short interfering RNAs (siRNAs; ref. 1) or, more recently, by transcription of short hairpin RNAs (shRNAs) from expression vectors and retroviruses. But the resistance of important cell types to transduction by these approaches, both in vitro and in vivo, has limited the use of RNAi. Here we describe a lentiviral system for delivery of shRNAs into cycling and non-cycling mammalian cells, stem cells, zygotes and their differentiated progeny. We show that lentivirus-delivered shRNAs are capable of specific, highly stable and functional silencing of gene expression in a variety of cell types and also in transgenic mice. Our lentiviral vectors should permit rapid and efficient analysis of gene function in primary human and animal cells and tissues and generation of animals that show reduced expression of specific genes. They may also provide new approaches for gene therapy.", "title": "" }, { "docid": "5e1f51b3d9b6ff91fbba6b7d155ecfaf", "text": "If a teleoperation scenario foresees complex and fine manipulation tasks a multi-fingered telemanipulation system is required. In this paper a multi-fingered telemanipulation system is presented, whereby the human hand controls a three-finger robotic gripper and force feedback is provided by using an exoskeleton. Since the human hand and robotic grippers have different kinematic structures, appropriate mappings for forces and positions are applied. A point-to-point position mapping algorithm as well as a simple force mapping algorithm are presented and evaluated in a real experimental setup.", "title": "" }, { "docid": "b325f262a6f84637c8a175c29f07db34", "text": "The aim of this article is to present a synthetic overview of the state of knowledge regarding the Celtic cultures in the northwestern Iberian Peninsula. 
It reviews the difficulties linked to the fact that linguists and archaeologists do not agree on this subject, and that the hegemonic view rejects the possibility that these populations can be considered Celtic. On the other hand, the examination of a range of direct sources of evidence, including literary and epigraphic texts, and the application of the method of historical anthropology to the available data, demonstrate the validity of the consideration of Celtic culture in this region, which can be described as a protohistorical society of the Late Iron Age, exhibiting a hierarchical organization based on ritually chosen chiefs whose power was based in part on economic redistribution of resources, together with a priestly elite more or less of the druidic type. However, the method applied cannot on its own answer the questions of when and how this Celtic cultural dimension of the proto-history of the northwestern Iberian Peninsula developed.", "title": "" }, { "docid": "94076bd2a4587df2bee9d09e81af2109", "text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. 
We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.", "title": "" }, { "docid": "4e924d619325ca939955657db1280db1", "text": "This paper presents the dynamic modeling of a nonholonomic mobile robot and the dynamic stabilization problem. The dynamic model is based on the kinematic one including nonholonomic constraints. The proposed control strategy allows to solve the control problem using linear controllers and only requires the robot localization coordinates. This strategy was tested by simulation using Matlab-Simulink. Key-words: Mobile robot, kinematic and dynamic modeling, simulation, point stabilization problem.", "title": "" }, { "docid": "985e8fae88a81a2eec2ca9cc73740a0f", "text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. 
A case illustration is provided.", "title": "" }, { "docid": "21326db81a613fc84184c19408bc67ac", "text": "In the scenario where an underwater vehicle tracks an underwater target, reliable estimation of the target position is required.While USBL measurements provide target position measurements at low but regular update rate, multibeam sonar imagery gives high precision measurements but in a limited field of view. This paper describes the development of the tracking filter that fuses USBL and processed sonar image measurements for tracking underwater targets for the purpose of obtaining reliable tracking estimates at steady rate, even in cases when either sonar or USBL measurements are not available or are faulty. The proposed algorithms significantly increase safety in scenarios where underwater vehicle has to maneuver in close vicinity to human diver who emits air bubbles that can deteriorate tracking performance. In addition to the tracking filter development, special attention is devoted to adaptation of the region of interest within the sonar image by using tracking filter covariance transformation for the purpose of improving detection and avoiding false sonar measurements. Developed algorithms are tested on real experimental data obtained in field conditions. Statistical analysis shows superior performance of the proposed filter compared to conventional tracking using pure USBL or sonar measurements.", "title": "" }, { "docid": "8f9309ebfc87de5eb7cf715c0370da54", "text": "Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. 
Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "fb1b80f1e7109b382994ca61b993ad71", "text": "We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-tomodel camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.", "title": "" }, { "docid": "9dec1ac5acaef4ae9ddb5e65e4097773", "text": "We propose a novel fully convolutional network architecture for shapes, denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented as graph structures in the SFCN architecture, based on novel graph convolution and pooling operations, which are similar to convolution and pooling operations used on images. Meanwhile, to build our SFCN architecture in the original image segmentation fully convolutional network (FCN) architecture, we also design and implement a generating operation with bridging function. 
This ensures that the convolution and pooling operations we have designed can be successfully applied in the original FCN architecture. In this paper, we also present a new shape segmentation approach based on SFCN. Furthermore, we allow more general and challenging input, such as mixed datasets of different categories of shapes, which demonstrates the generalisation ability of our method. In our approach, SFCNs are trained triangles-to-triangles by using three low-level geometric features as input. Finally, the feature voting-based multi-label graph cuts is adopted to optimise the segmentation results obtained by SFCN prediction. The experimental results show that our method can effectively learn and predict mixed shape datasets of either similar or different characteristics, and achieve excellent segmentation results.", "title": "" }, { "docid": "9a6de540169834992134eb02927d889d", "text": "In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text and natural language generation from ontologies, currently available datamodels are not sufficient as they only allow to associate atomic terms without linguistic grounding or structure to ontology elements. Towards realizing a more expressive model for associating linguistic information to ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. 
LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF which has been recently approved as a standard under ISO.", "title": "" }, { "docid": "8b550446a16158b7d3eefacd2d6396ff", "text": "We propose a theory of eigenvalues, eigenvectors, singular values, and singular vectors for tensors based on a constrained variational approach much like the Rayleigh quotient for symmetric matrix eigenvalues. These notions are particularly useful in generalizing certain areas where the spectral theory of matrices has traditionally played an important role. For illustration, we will discuss a multilinear generalization of the Perron-Frobenius theorem.", "title": "" }, { "docid": "513455013ecb2f4368566ba30cdb8d7f", "text": "Many modern multi-core processors sport a large shared cache with the primary goal of enhancing the statistic performance of computing workloads. However, due to resulting cache interference among tasks, the uncontrolled use of such a shared cache can significantly hamper the predictability and analyzability of multi-core real-time systems. Software cache partitioning has been considered as an attractive approach to address this issue because it does not require any hardware support beyond that available on many modern processors. However, the state-of-the-art software cache partitioning techniques face two challenges: (1) the memory co-partitioning problem, which results in page swapping or waste of memory, and (2) the availability of a limited number of cache partitions, which causes degraded performance. These are major impediments to the practical adoption of software cache partitioning. In this paper, we propose a practical OS-level cache management scheme for multi-core real-time systems. 
Our scheme provides predictable cache performance, addresses the aforementioned problems of existing software cache partitioning, and efficiently allocates cache partitions to schedule a given task set. We have implemented and evaluated our scheme in Linux/RK running on the Intel Core i7 quad-core processor. Experimental results indicate that, compared to the traditional approaches, our scheme is up to 39% more memory space efficient and consumes up to 25% less cache partitions while maintaining cache predictability. Our scheme also yields a significant utilization benefit that increases with the number of tasks.", "title": "" }, { "docid": "f8854602bbb2f5295a5fba82f22ca627", "text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. 
We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.", "title": "" }, { "docid": "10512cddabf509100205cb241f2f206a", "text": "Due to an increasing growth of Internet usage, cybercrimes has been increasing at an Alarming rate and has become most profitable criminal activity. Botnet is an emerging threat to the cyber security and existence of Command and Control Server(C&C Server) makes it very dangerous attack as compare to all other malware attacks. Botnet is a network of compromised machines which are remotely controlled by bot master to do various malicious activities with the help of command and control server and n-number of slave machines called bots. The main motive behind botnet is Identity theft, Denial of Service attack, Click fraud, Phishing and many other malware activities. Botnets rely on different protocols such as IRC, HTTP and P2P for transmission. Different botnet detection techniques have been proposed in recent years. This paper discusses Botnet, Botnet history, and life cycle of Botnet apart from classifying various Botnet detection techniques. Paper highlights the recent research work under botnets in cyber realm and proposes directions for future research in this area.", "title": "" }, { "docid": "0d1193978e4f8be0b78c6184d7ece3fe", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. 
This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain-specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach, network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. 
In the second approach, network classification is done by using very flexible machine learning classifiers that, when presented with a network as an input, classify its category or class as an output. To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …
To improve the ESD robustness of nLDMOS, a co-design method combining a new waffle layout structure and a trigger circuit is proposed in this work to realize the body current injection technique. The proposed layout and circuit co-design method on HV nLDMOS has successfully been verified in a 0.5-µm 16-V bipolar-CMOS-DMOS (BCD) process and a 0.35-µm 24-V BCD process without using additional process modification. Experimental results through transmission line pulse measurement and failure analyses have shown that the proposed body current injection technique can significantly improve the ESD robustness of HV nLDMOS.", "title": "" }, { "docid": "6d9f5f9e61c9b94febdd8e04cf999636", "text": "The Internet offers the hope of a more democratic society. By promoting a decentralized form of social mobilization, it is said, the Internet can help us to renovate our institutions and liberate ourselves from our authoritarian legacies. The Internet does indeed hold these possibilities, but they are hardly inevitable. In order for the Internet to become a tool for social progress, not a tool of oppression or another centralized broadcast medium or simply a waste of money, concerned citizens must understand the different ways in which the Internet can become embedded in larger social processes. In thinking about culturally appropriate ways of using technologies like the Internet, the best starting-point is with people: coherent communities of people and the ways they think together. Let us consider an example. A photocopier company asked an anthropologist named Julian Orr to study its repair technicians and recommend the best ways to use technology in supporting their work. Orr (1996) took a broad view of the technicians' lives, learning some of their skills and following them around. 
Each morning the technicians would come to work, pick up their company vehicles, and drive to customers' premises where photocopiers needed fixing; each evening they would return to the company, go to a bar together, and drink beer. Although the company had provided the technicians with formal training, Orr discovered that they actually acquired much of their expertise informally while drinking beer together. Having spent the day contending with difficult repair problems, they would entertain one another with "war stories", and these stories often helped them with future repairs. He suggested, therefore, that the technicians be given radio equipment so that they could remain in contact all day, telling stories and helping each other with their repair tasks. As Orr's (1996) story suggests, people think together best when they have something important in common. Networking technologies can often be used to create a Telematics and Informatics 15 (1998) 231–234", "title": "" } ]
scidocsrr
9ebc7a07fb187da08612b5538e4ad9ed
Multitask learning for semantic sequence prediction under varying data conditions
[ { "docid": "960252eeff41c4ad9cb330b02aaf241c", "text": "• Translation improvement with little parsing / caption data. • State-of-the-art constituent parsing. • Translation: (Luong et al., 2015) – WMT English ⇄ German: 4.5M examples. • Parsing: (Vinyals et al., 2015a) – Penn Tree Bank (PTB): 40K examples. – High Confidence (HC): 11M examples. • Caption: (Vinyals et al., 2015b) – 600K examples. • Unsupervised: auto-encoders & skip-thought – 12.1M English and 13.8M German examples. • Setup: (Sutskever et al., 2014), attention-free – 4-layer deep LSTMs: 1000-dim cells/embeddings. Can we benefit from multi-task seq2seq learning?", "title": "" }, { "docid": "7161122eaa9c9766e9914ba0f2ee66ef", "text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.", "title": "" }, { "docid": "0188bdf1c03995b6ae2218083864fc58", "text": "We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. 
The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.", "title": "" } ]
[ { "docid": "9a8133fbfe2c9422b6962dd88505a9e9", "text": "The amino acid sequences of 301 glycosyl hydrolases and related enzymes have been compared. A total of 291 sequences corresponding to 39 EC entries could be classified into 35 families. Only ten sequences (less than 5% of the sample) could not be assigned to any family. With the sequences available for this analysis, 18 families were found to be monospecific (containing only one EC number) and 17 were found to be polyspecific (containing at least two EC numbers). Implications on the folding characteristics and mechanism of action of these enzymes and on the evolution of carbohydrate metabolism are discussed. With the steady increase in sequence and structural data, it is suggested that the enzyme classification system should perhaps be revised.", "title": "" }, { "docid": "d2cbeb1f764b5a574043524bb4a0e1a9", "text": "The latest 6th generation Carrier Stored Trench Gate Bipolar Transistor (CSTBT™) provides state of the art optimization of conduction and switching losses in IGBT modules. Use of low values of resistance in series with the IGBT gate produces low turn-on losses but increases stress on the recovery of the free-wheel diode resulting in higher dv/dt and increased EMI. The latest modules also incorporate new, improved recovery free-wheel diode chips which improve this situation but detailed evaluation of the trade-off between turn-on loss and dv/dt performance is required. This paper describes the evaluation, test results, and a comparative analysis of dv/dt versus turn-on loss as a function of gate drive conditions for the 6th generation IGBT compared to the standard 5th generation module.", "title": "" }, { "docid": "7de923c310b38193b2d4d3bd9e7096bb", "text": "To date, most research into massively multiplayer online role-playing games (MMORPGs) has examined the demographics of play. This study explored the social interactions that occur both within and outside of MMORPGs. 
The sample consisted of 912 self-selected MMORPG players from 45 countries. MMORPGs were found to be highly socially interactive environments providing the opportunity to create strong friendships and emotional relationships. The study demonstrated that the social interactions in online gaming form a considerable element in the enjoyment of playing. The study showed MMORPGs can be extremely social games, with high percentages of gamers making life-long friends and partners. It was concluded that virtual gaming may allow players to express themselves in ways they may not feel comfortable doing in real life because of their appearance, gender, sexuality, and/or age. MMORPGs also offer a place where teamwork, encouragement, and fun can be experienced.", "title": "" }, { "docid": "7af26168ae1557d8633a062313d74b78", "text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. 
Code and models are publicly available.", "title": "" }, { "docid": "a75ab88f3b7f672bc357429793e74635", "text": "To save life, casualty care requires that trauma injuries are accurately and expeditiously assessed in the field. This paper describes the initial bench testing of a wireless wearable pulse oximeter developed based on a small forehead-mounted sensor. The battery-operated device employs a lightweight optical reflectance sensor and incorporates an annular photodetector to reduce power consumption. The system also has short-range wireless communication capabilities to transfer arterial oxygen saturation (SpO2), heart rate (HR), body acceleration, and posture information to a PDA. It has the potential for use in combat casualty care, such as for remote triage, and by first responders, such as firefighters.", "title": "" }, { "docid": "4f846635e4f23b7630d0c853559f71dc", "text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH•). Free radicals can also be generated by 6-hydroxydopamine or MPTP, which destroy striatal dopaminergic neurons, causing parkinsonism in experimental animals as well as human beings. 
Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.", "title": "" }, { "docid": "d44080fc547355ff8389f9da53d03c45", "text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. 
To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.", "title": "" }, { "docid": "2affffd57677d58df6fc63cc4a83da5d", "text": "Dealing with failure is easy: Work hard to improve. Success is also easy to handle: You've solved the wrong problem. Work hard to improve.", "title": "" }, { "docid": "135785028bac0bbc219d2ae19bb3a9dd", "text": "MOTIVATION\nBiomarker discovery is an important topic in biomedical applications of computational biology, including applications such as gene and SNP selection from high-dimensional data. Surprisingly, the stability with respect to sampling variation or robustness of such selection processes has received attention only recently. However, robustness of biomarkers is an important issue, as it may greatly influence subsequent biological validations. In addition, a more robust set of markers may strengthen the confidence of an expert in the results of a selection method.\n\n\nRESULTS\nOur first contribution is a general framework for the analysis of the robustness of a biomarker selection algorithm. Secondly, we conducted a large-scale analysis of the recently introduced concept of ensemble feature selection, where multiple feature selections are combined in order to increase the robustness of the final set of selected features. We focus on selection methods that are embedded in the estimation of support vector machines (SVMs). SVMs are powerful classification models that have shown state-of-the-art performance on several diagnosis and prognosis tasks on biological data. 
Their feature selection extensions also offered good results for gene selection tasks. We show that the robustness of SVMs for biomarker discovery can be substantially increased by using ensemble feature selection techniques, while at the same time improving upon classification performances. The proposed methodology is evaluated on four microarray datasets showing increases of up to almost 30% in robustness of the selected biomarkers, along with an improvement of approximately 15% in classification performance. The stability improvement with ensemble methods is particularly noticeable for small signature sizes (a few tens of genes), which is most relevant for the design of a diagnosis or prognosis model from a gene signature.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "199084b75740e020d66f91dab57610c4", "text": "In double-stage grid-connected photovoltaic (PV) inverters, the dynamic interactions among the DC/DC and DC/AC stages and the maximum power point tracking (MPPT) controller may reduce the system performances. In this paper, the detrimental effects, particularly in terms of system efficiency and MPPT performances, of the oscillations of the PV array voltage, taking place at the second harmonic of the grid frequency are evidenced. The use of a proper compensation network acting on the error signal between a reference signal provided by the MPPT controller and a signal that is proportional to the PV array voltage is proposed. The guidelines for the proper joint design of the compensation network (which is able to cancel out the PV voltage oscillations) and of the main MPPT parameters are provided in this paper. 
Simulation results and experimental measurements confirm the effectiveness of the proposed approach.", "title": "" }, { "docid": "7a8f79e2cf62e61a4602d532e9afaf7e", "text": "Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of product attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product’s attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one product.", "title": "" }, { "docid": "d3b6ba3e4b8e80c3c371226d7ae6d610", "text": "Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence need to be generated within the fast-evolving field of learning analytics. The purpose of the two reported case studies is to identify alternative approaches to data analysis and to determine the validity and accuracy of a learning analytics framework and its corresponding student and learning profiles. 
The findings indicate that educational data for learning analytics is context specific and variables carry different meanings and can have different implications across educational institutions and area of studies. Benefits, concerns, and challenges of learning analytics are critically reflected, indicating that learning analytics frameworks need to be sensitive to idiosyncrasies of the educational institution and its stakeholders.", "title": "" }, { "docid": "5f6d142860a4bd9ff1fa9c4be9f17890", "text": "Local conditioning (LC) is an exact algorithm for computing probability in Bayesian networks, developed as an extension of Kim and Pearl’s algorithm for singly-connected networks. A list of variables associated to each node guarantees that only the nodes inside a loop are conditioned on the variable which breaks it. The main advantage of this algorithm is that it computes the probability directly on the original network instead of building a cluster tree, and this can save time when debugging a model and when the sparsity of evidence allows a pruning of the network. The algorithm is also advantageous when some families in the network interact through AND/OR gates. A parallel implementation of the algorithm with a processor for each node is possible even in the case of multiply-connected networks.", "title": "" }, { "docid": "2ca5118d8f4402ed1a2d1c26fbcf9f53", "text": "Weakly supervised data is an important machine learning data to help improve learning performance. However, recent results indicate that machine learning techniques with the usage of weakly supervised data may sometimes cause performance degradation. Safely leveraging weakly supervised data is important, whereas there is only very limited effort, especially on a general formulation to help provide insight to guide safe weakly supervised learning. In this paper we present a scheme that builds the final prediction results by integrating several weakly supervised learners. 
Our resultant formulation brings two advantages. i) For the commonly used convex loss functions in both regression and classification tasks, safeness guarantees exist under a mild condition; ii) Prior knowledge related to the weights of base learners can be embedded in a flexible manner. Moreover, the formulation can be solved globally and efficiently by a simple convex quadratic or linear program. Experiments on multiple weakly supervised learning tasks such as label noise learning, domain adaptation and semi-supervised learning validate the effectiveness.", "title": "" }, { "docid": "38cf4762ce867ff39a3e0f892758ddfd", "text": "Quality control of food inventories in the warehouse is complex as well as challenging due to the fact that food can easily deteriorate. Currently, this difficult storage problem is managed mostly by using a human-dependent quality assurance and decision-making process. This has, however, occasionally led to unimaginative, arduous and inconsistent decisions due to the injection of subjective human intervention into the process. Therefore, it could be said that current practice is not powerful enough to support high-quality inventory management. In this paper, the development of an integrative prototype decision support system, namely, Intelligent Food Quality Assurance System (IFQAS), is described which will assist the process by automating the human-based decision-making process in the quality control of food storage. The system, which is composed of a Case-based Reasoning (CBR) engine and a Fuzzy rule-based Reasoning (FBR) engine, starts with the receipt of incoming food inventory. With the CBR engine, certain quality assurance operations can be suggested based on the attributes of the food received. Further to this, the FBR engine can make suggestions on the optimal storage conditions of inventory by systematically evaluating the food conditions when the food is received. 
With the assistance of the system, holistic monitoring of quality control over the receiving operations and the storage conditions of the food in the warehouse can be performed. It provides consistent and systematic Quality Assurance Guidelines for quality control, which leads to improvement in the level of customer satisfaction and minimization of the defective rate.", "title": "" }, { "docid": "0fca0826e166ddbd4c26fe16086ff7ec", "text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages it can cause at the corners of the mouth and in the gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolysin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. 
ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.", "title": "" }, { "docid": "607cff7a41d919bef9f4aa0cec3c1c9d", "text": "The goal of this work was to develop and validate a neuro-fuzzy intelligent system (LOLIMOT) for rectal temperature prediction of broiler chickens. The neuro-fuzzy network was developed using SCILAB 4.1, on the basis of three input variables: air temperature, relative humidity and air velocity. The output variable was rectal temperature. Experimental results, used for validation, showed that the average standard deviation between simulated and measured values of RT was 0.11 °C. The neuro-fuzzy system proves to be a satisfactory hybrid intelligent system for rectal temperature prediction of broiler chickens, which adds fuzzy logic features based on the fuzzy sets theory to artificial neural networks.", "title": "" }, { "docid": "4f8bd885eb918b5b79395a1f6a6542c9", "text": "This paper presents an exposition of a new swarm intelligence–based algorithm for optimization. Modeling swallow swarm movement and other swallow behaviors, this method represents a new approach to optimization. There are three kinds of particles in this method: explorer particles, aimless particles, and leader particles. Each particle has a personal feature but all of them have a central colony of flying. Each particle exhibits an intelligent behavior and, perpetually, explores its surroundings with an adaptive radius. The situations of neighbor particles, local leader, and public leader are considered, and a move is then made. 
The swallow swarm optimization algorithm has proved highly efficient, offering fast movement in flat areas (areas where there is no hope of finding food and the derivative is equal to zero), avoidance of getting stuck in local extremum points, high convergence speed, and intelligent participation in the different groups of particles. The SSO algorithm has been tested on 19 benchmark functions. It achieved good results on multimodal, rotated and shifted functions. Results of this method have been compared to standard PSO, the FSO algorithm, and ten different kinds of PSO.", "title": "" }, { "docid": "62284eed1a821099d6776cccb59459d8", "text": "This paper describes a method of stereo-based road boundary tracking for mobile robot navigation. Since sensory evidence for road boundaries might change from place to place, we cannot depend on a single cue but have to use multiple sensory features. The method uses color, edge, and height information obtained from a single stereo camera. To cope with a variety of road types and shapes, as well as changes in them, we adopt a particle filter in which road boundary hypotheses are represented by particles. The proposed method has been tested in various road scenes and conditions, and verified to be effective for autonomous driving of a mobile robot.", "title": "" }, { "docid": "2ce31e318505bd3795d5db9ea5fcd7cc", "text": "Energy efficiency is the main objective in the design of a wireless sensor network (WSN). In many applications, sensing data must be transmitted from sources to a sink in a timely manner. This paper describes an investigation of the trade-off between two objectives in WSN design: minimizing energy consumption and minimizing end-to-end delay. We first propose a new distributed clustering approach to determining the best clusterhead for each cluster by considering both energy consumption and end-to-end delay requirements. Next, we propose a new energy-cost function and a new end-to-end delay function for use in an inter-cluster routing algorithm. 
We present a multi-hop routing algorithm for disseminating sensing data from clusterheads to a sink at the minimum energy cost subject to an end-to-end delay constraint. The simulation results are consistent with our theoretical analysis and show that our proposed approach performs much better than similar protocols in terms of energy consumption and end-to-end delay.", "title": "" } ]
scidocsrr
6c8266e6dff973b7fdc5211cb243e6a8
Remembering the real me: Nostalgia offers a window to the intrinsic self.
[ { "docid": "3512d0a45a764330c8a66afab325d03d", "text": "Self-concept clarity (SCC) references a structural aspect of the self-concept: the extent to which self-beliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study 1); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western self-construals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.", "title": "" }, { "docid": "2192004c3aa0e43180016e8ef7207ce9", "text": "Measures of well-being were created to assess psychological flourishing and feelings—positive feelings, negative feelings, and the difference between the two. The scales were evaluated in a sample of 689 college students from six locations. The Flourishing Scale is a brief 8-item summary measure of the respondent’s self-perceived success in important areas such as relationships, self-esteem, purpose, and optimism. The scale provides a single psychological well-being score. The measure has good psychometric properties, and is strongly associated with other psychological well-being scales. The Scale of Positive and Negative Experience produces a score for positive feelings (6 items), a score for negative feelings (6 items), and the two can be combined to create a balance score. This 12-item brief scale has a number of desirable features compared to earlier measures of positive and negative emotions. 
In particular, the scale assesses feelings with only a few items. Soc Indic Res (2010) 97:143–156, DOI 10.1007/s11205-009-9493-y", "title": "" } ]
[ { "docid": "661d5db6f4a8a12b488d6f486ea5995e", "text": "Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, there are no comprehensive studies that completely cover all the different aspects of the problem. This paper presents a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture is proposed, divided into four steps specified through four pivotal questions built around the ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system can be gained by answering these questions. Each step of this reference roadmap addresses a specific concern of a particular portion of the issue. Two main research gaps are identified through this reference roadmap.", "title": "" }, { "docid": "24e10d8e12d8b3c618f88f1f0d33985d", "text": "W-algebras of finite type are certain finitely generated associative algebras closely related to universal enveloping algebras of semisimple Lie algebras. In this paper we prove a conjecture of Premet that gives an almost complete classification of finite dimensional irreducible modules for W-algebras. Also we get some partial results towards a conjecture by Ginzburg on their finite dimensional bimodules.", "title": "" }, { "docid": "af2ca5822b18b983fee34cf9e1e8b077", "text": "Obtaining solutions of the Navier–Stokes equations is a challenging quest, especially because of the closure problem. To obtain particular solutions, a variety of numerical simulation approaches exist, including Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). 
These methods analyze flow physics through efficient reduced-order modeling, such as proper orthogonal decomposition or the Koopman method, showing prominent fidelity in fluid dynamics. A generative adversarial network (GAN) mimics networks of neurons in the brain as combinations of linear operations, using competition between a generator and a discriminator. The current paper proposes a deep learning network for predicting small-scale motions from large-scale inspections only, using a GAN. A DNS result of a three-dimensional mixing layer was therefore filtered to blur out the small-scale structures, and its detailed structures were then predicted using a GAN. This enables a multi-resolution analysis in which the network is asked to predict the fine-resolution solution from inspection of the blurry one alone. Within the grid scale, the current paper presents a deep learning approach to modeling small-scale features in turbulent flow. The presented method is expected to be novel in its use of unprocessed simulation data, its recovery of 3D structures in the prediction by processing 3D convolutions, and its prediction of precise solutions at lower computational cost.", "title": "" }, { "docid": "ab7c239f93aef2c8528294d8ec62f244", "text": "Nevus depigmentosus presents with areas of hypopigmentation along Blaschko’s lines and may be associated with disorders of the central nervous system, musculoskeletal system, eyes, and teeth. Nevus of Ito is a dermal melanocytosis of the acromioclavicular and upper chest area. Although both nevus depigmentosus and nevus of Ito occur commonly, their coexistence in a manner representative of allelic twin spotting has not previously been reported.", "title": "" }, { "docid": "9b05928e76a8ab764ea558947438694d", "text": "Developing scalable solution algorithms is one of the central problems in computational game theory. 
We present an iterative algorithm for computing an exact Nash equilibrium for two-player zero-sum extensive-form games with imperfect information. Our approach combines two key elements: (1) the compact sequence-form representation of extensiveform games and (2) the algorithmic framework of double-oracle methods. The main idea of our algorithm is to restrict the game by allowing the players to play only selected sequences of available actions. After solving the restricted game, new sequences are added by finding best responses to the current solution using fast algorithms. We experimentally evaluate our algorithm on a set of games inspired by patrolling scenarios, board, and card games. The results show significant runtime improvements in games admitting an equilibrium with small support, and substantial improvement in memory use even on games with large support. The improvement in memory use is particularly important because it allows our algorithm to solve much larger game instances than existing linear programming methods. Our main contributions include (1) a generic sequence-form double-oracle algorithm for solving zero-sum extensive-form games; (2) fast methods for maintaining a valid restricted game model when adding new sequences; (3) a search algorithm and pruning methods for computing best-response sequences; (4) theoretical guarantees about the convergence of the algorithm to a Nash equilibrium; (5) experimental analysis of our algorithm on several games, including an approximate version of the algorithm.", "title": "" }, { "docid": "fb116c7cd3ab8bd88fb7817284980d4a", "text": "Sentence-level sentiment classification is important to understand users' fine-grained opinions. Existing methods for sentence-level sentiment classification are mainly based on supervised learning. However, it is difficult to obtain sentiment labels of sentences since manual annotation is expensive and time-consuming. 
In this paper, we propose an approach for sentence-level sentiment classification without the need of sentence labels. More specifically, we propose a unified framework to incorporate two types of weak supervision, i.e., document-level and word-level sentiment labels, to learn the sentence-level sentiment classifier. In addition, the contextual information of sentences and words extracted from unlabeled sentences is incorporated into our approach to enhance the learning of sentiment classifier. Experiments on benchmark datasets show that our approach can effectively improve the performance of sentence-level sentiment classification.", "title": "" }, { "docid": "aab83f305b6519c091f883d869a0b92c", "text": "With the development of the web of data, recent statistical, data-to-text generation approaches have focused on mapping data (e.g., database records or knowledge-base (KB) triples) to natural language. In contrast to previous grammar-based approaches, this more recent work systematically eschews syntax and learns a direct mapping between meaning representations and natural language. By contrast, I argue that an explicit model of syntax can help support NLG in several ways. Based on case studies drawn from KB-to-text generation, I show that syntax can be used to support supervised training with little training data; to ensure domain portability; and to improve statistical hypertagging.", "title": "" }, { "docid": "6b214fdd60a1a4efe27258c2ab948086", "text": "Ambient Assisted Living (AAL) aims to create innovative technical solutions and services to support independent living among older adults, improve their quality of life and reduce the costs associated with health and social care. AAL systems provide health monitoring through sensor based technologies to preserve health and functional ability and facilitate social support for the ageing population. 
Human activity recognition (HAR) is an enabler for the development of robust AAL solutions, especially in safety-critical environments. Therefore, HAR models applied within this domain (e.g. for fall detection or for providing contextual information to caregivers) need to be accurate to assist in developing reliable support systems. In this paper, we evaluate three machine learning algorithms, namely Support Vector Machine (SVM), a hybrid of Hidden Markov Models (HMM) and SVM (SVM-HMM), and Artificial Neural Networks (ANNs), applied on a dataset collected between the elderly and their caregiver counterparts. Detected activities will later serve as inputs to a bidirectional activity awareness system for increasing social connectedness. Results show high classification performances for all three algorithms. Specifically, the SVM-HMM hybrid demonstrates the best classification performance. In addition to this, we make our dataset publicly available for use by the machine learning community.", "title": "" }, { "docid": "56bd18820903da1917ca5d194b520413", "text": "The problem of identifying subtle time-space clustering of disease, as may be occurring in leukemia, is described and reviewed. Published approaches, generally associated with studies of leukemia, not dependent on knowledge of the underlying population for their validity, are directed towards identifying clustering by establishing a relationship between the temporal and the spatial separations for the n(n−1)/2 possible pairs which can be formed from the n observed cases of disease. Here it is proposed that statistical power can be improved by applying a reciprocal transform to these separations. While a permutational approach can give valid probability levels for any observed association, for reasons of practicability, it is suggested that the observed association be tested relative to its permutational variance. Formulas and computational procedures for doing so are given. 
While the distance measures between points represent symmetric relationships subject to mathematical and geometric regularities, the variance formula developed is appropriate for arbitrary relationships. Simplified procedures are given for the case of symmetric and skew-symmetric relationships. The general procedure is indicated as being potentially useful in other situations as, for example, the study of interpersonal relationships. Viewing the procedure as a regression approach, the possibility for extending it to nonlinear and multivariate situations is suggested. Other aspects of the problem and of the procedure developed are discussed.", "title": "" }, { "docid": "246c00f833bf74645eabd8bd773f93d7", "text": "What kinds of content do children and teenagers author and share on public video platforms? We approached this question through a qualitative directed content analysis of over 250 youth-authored videos filtered by crowdworkers from public videos on YouTube and Vine. We found differences between YouTube and Vine platforms in terms of the age of the youth authors, the type of collaborations witnessed in the videos, and the significantly greater amount of violent, sexual, and obscene content on Vine. We also highlight possible differences in how adults and youths approach online video sharing. Specifically, we consider that adults may view online video as an archive to keep precious memories of everyday life with their family, friends, and pets, humorous moments, and special events, while children and teenagers treat online video as a stage to perform, tell stories, and express their opinions and identities in a performative way.", "title": "" }, { "docid": "dc310f1a5fb33bd3cbe9de95b2a0159c", "text": "The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. 
We conclude that, despite some shortcomings, the armband has the potential to become a new “standard” controller in the NIME community.", "title": "" }, { "docid": "c8a2804a0c1a32956d1d850daa57bfff", "text": "BACKGROUND\nData for the causes of maternal deaths are needed to inform policies to improve maternal health. We developed and analysed global, regional, and subregional estimates of the causes of maternal death during 2003-09, with a novel method, updating the previous WHO systematic review.\n\n\nMETHODS\nWe searched specialised and general bibliographic databases for articles published between Jan 1, 2003, and Dec 31, 2012, for research data, with no language restrictions, and the WHO mortality database for vital registration data. On the basis of prespecified inclusion criteria, we analysed causes of maternal death from datasets. We aggregated country level estimates to report estimates of causes of death by Millennium Development Goal regions and worldwide, for main and subcauses of death categories with a Bayesian hierarchical model.\n\n\nFINDINGS\nWe identified 23 eligible studies (published 2003-12). We included 417 datasets from 115 countries comprising 60 799 deaths in the analysis. About 73% (1 771 000 of 2 443 000) of all maternal deaths between 2003 and 2009 were due to direct obstetric causes, and deaths due to indirect causes accounted for 27·5% (672 000, 95% UI 19·7-37·5) of all deaths. Haemorrhage accounted for 27·1% (661 000, 19·9-36·2), hypertensive disorders 14·0% (343 000, 11·1-17·4), and sepsis 10·7% (261 000, 5·9-18·6) of maternal deaths. The rest of the deaths were due to abortion (7·9% [193 000], 4·7-13·2), embolism (3·2% [78 000], 1·8-5·5), and all other direct causes of death (9·6% [235 000], 6·5-14·3). Regional estimates varied substantially.\n\n\nINTERPRETATION\nBetween 2003 and 2009, haemorrhage, hypertensive disorders, and sepsis were responsible for more than half of maternal deaths worldwide. 
More than a quarter of deaths were attributable to indirect causes. These analyses should inform the prioritisation of health policies, programmes, and funding to reduce maternal deaths at regional and global levels. Further efforts are needed to improve the availability and quality of data related to maternal mortality.", "title": "" }, { "docid": "8165132bed6f74274c7a9aa3ba91767b", "text": "Pattern detection over streams of events is gaining more and more attention, especially in the field of eCommerce. Our industrial partner Cdiscount, which is one of the largest eCommerce companies in France, wants to use pattern detection for real-time customer behavior analysis. The main challenges to consider are efficiency and scalability, as the detection of customer behavior must be achieved within a few seconds, while millions of unique customers visit the website every day, each performing hundreds of actions. In this paper, we present our approach to large-scale and efficient pattern detection for eCommerce. It relies on a domain-specific language to define behavior patterns. Patterns are then compiled into deterministic finite automata, which are run on a Big Data streaming platform to carry out the detection work. Our evaluation shows that our approach is efficient and scalable, and fits the requirements of Cdiscount.", "title": "" }, { "docid": "7b5f0c88eaf8c23b8e2489e140d0022f", "text": "Deep learning has been integrated into several existing left ventricle (LV) endocardium segmentation methods to yield impressive accuracy improvements. However, challenges remain for segmentation of LV epicardium due to its fuzzier appearance and complications from the right ventricular insertion points. Segmenting the myocardium collectively (i.e., endocardium and epicardium together) confers the potential for better segmentation results. 
In this work, we develop a computational platform based on deep learning to segment the whole LV myocardium simultaneously from a cardiac magnetic resonance (CMR) image. The deep convolutional network is constructed using the Caffe platform and consists of 6 convolutional layers, 2 pooling layers, and 1 de-convolutional layer. A preliminary result with a Dice metric of 0.75±0.04 is reported on the York MR dataset. While, in its current form, our proposed one-step deep learning method cannot compete with state-of-the-art myocardium segmentation methods, it delivers promising first-pass segmentation results.", "title": "" }, { "docid": "5d63c5820cc8035822b86ef5fdaebefd", "text": "As the third most popular social network among millennials, Snapchat is well known for its picture and video messaging system that deletes content after it is viewed. However, the Stories feature of Snapchat offers a different perspective of ephemeral content sharing, with pictures and videos that are available for friends to watch an unlimited number of times for 24 hours. We conducted an in-depth qualitative investigation by interviewing 18 participants and reviewing 14 days of their Stories posts. We identify five themes focused on how participants perceive and use the Stories feature, and apply a Goffmanesque metaphor to our analysis. We relate the Stories medium to other research on self-presentation and identity curation in social media.", "title": "" }, { "docid": "d911ccb1bbb761cbfee3e961b8732534", "text": "This paper presents a study on SIFT (Scale-Invariant Feature Transform), a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. 
There are various applications of SIFT, which include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife, and match moving.", "title": "" }, { "docid": "a1aa698df4509c093cbef1b283d2384e", "text": "Agent-based modeling and simulation (ABMS) is an approach to modeling systems composed of individual, autonomous, interacting \"agents.\" There is much interest in many application problem domains in developing agent-based models. Agent-based modeling offers ways to model individual behaviors and how behaviors affect others in ways that have not been available before. Applications range from modeling agent behavior in supply chains and the stock market, to predicting the success of marketing campaigns and the spread of epidemics, to projecting the future needs of the healthcare system. Progress in the area suggests that ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use agent-based models as electronic laboratories to aid in discovery. This brief tutorial introduces agent-based modeling by describing the basic ideas of ABMS, discussing some applications, and addressing methods for developing agent-based models.", "title": "" }, { "docid": "71d78a9a2e4ceb7026335fa914ce5e83", "text": "In multi-instance learning, each learning object consists of many descriptive instances. In the corresponding classification problems, each training object is labeled, but its constituent instances are not. The classification objective is to predict the class label of unseen objects. As in traditional single-instance classification, when the class sizes of multi-instance data are imbalanced, classification is degraded. Many multi-instance classifiers have been proposed, but few take into account the possibility of class imbalance, which causes them to fail in this situation. 
In this paper, we propose a new type of classifier that embodies a solution to the multi-instance class imbalance problem. Our proposal relies on the use of fuzzy rough set theory. We present two families of classifiers based on information extracted at bag level and at instance level, respectively. We experimentally show that our algorithms outperform state-of-the-art solutions to multi-instance imbalanced data classification, evaluated by the popular metrics AUC and geometric mean.", "title": "" }, { "docid": "1cbbc5af1327338283ca75e0bed7d53c", "text": "Microscopic examination revealed polymorphic cells with abundant cytoplasm and large nuclei within the acanthotic epidermis (Figure 3). There were aggregated melanin granules in the epidermis, as well as a subepidermal lymphocytic infiltrate. The atypical cells were positive for CK7 (Figure 4). A few scattered cells were positive with the Melan-A stain (Figure 5). Pigmented lesion of the left nipple in a 49-year-old woman: Case for Diagnosis", "title": "" }, { "docid": "a16405ebf57685a3324571ff64522114", "text": "The objective of this study is to analyze the possibility of blending conventional instruction with online instruction via a social networking website, Facebook, in EFL classrooms in order to motivate students and improve their English language learning. Thus, this paper seeks to examine specific ways in which EFL teachers can use Facebook as an educational tool, describing the benefits of this technological instrument and analyzing the potential pitfalls and challenges that it could create. Besides, it includes practical strategies that teachers can apply in order to overcome these pitfalls and get the most out of this social network.", "title": "" } ]
scidocsrr
f102af60577b83ae25d969c6a15917b1
Wearable Endfire Textile Antenna for On-Body Communications at 60 GHz
[ { "docid": "66dc20e12d8b6b99b67485203293ad07", "text": "A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types.", "title": "" } ]
[ { "docid": "d0d7016430b55ae6dec0edf3b5e1b1fd", "text": "• Our goal is to extend the Julia static analyzer, based on abstract interpretation, to perform formally correct analyses of Android programs. This article is an in-depth description of such an extension, of the difficulties that we faced, and of the results that we obtained. • We have extended the class analysis of the Julia analyzer, which lies at the heart of many other analyses, by considering some key Android-specific features. • Classcast, dead code, nullness, and termination analyses are performed. • Formally correct results are obtained in at most 7 min on standard hardware. • As a language, Android is Java with an extended library for mobile and interactive applications, hence based on an event-driven architecture. (WRONG)", "title": "" }, { "docid": "19d554b2ef08382418979bf7ceb15baf", "text": "In this paper, we address cross-lingual topic modeling, an important technique that enables global enterprises to detect and compare topic trends across global markets. Previous works in cross-lingual topic modeling have proposed methods that utilize a parallel or comparable corpus in constructing the polylingual topic model. However, parallel or comparable corpora are in many cases not available. In this research, we combine techniques for mapping cross-lingual word spaces with topic modeling (LDA) and propose two methods: Translated Corpus with LDA (TC-LDA) and Post Match LDA (PM-LDA). The cross-lingual word space mapping allows us to compare words of different languages, and LDA enables us to group words into topics. Both TC-LDA and PM-LDA do not need a parallel or comparable corpus and hence have more applicable domains. The effectiveness of both methods is evaluated using UM-Corpus and WS-353. Our evaluation results indicate that both methods are able to identify similar documents written in different languages. 
In addition, PM-LDA is shown to achieve better performance than TC-LDA, especially when document length is short.", "title": "" }, { "docid": "0c5143b222e1a8956dfb058b222ddc28", "text": "Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long short-term memory, is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.", "title": "" }, { "docid": "5006770c9f7a6fb171a060ad3d444095", "text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.", "title": "" }, { "docid": "d0811a8c8b760b8dadfa9a51df568bd9", "text": "A strain of the microalga Chlorella pyrenoidosa F-9 in our laboratory showed special characteristics when transferred from autotrophic to heterotrophic culture. 
In order to elucidate the possible metabolic mechanism, the gene expression profiles of the autonomous organelles in the green alga C. pyrenoidosa under autotrophic and heterotrophic cultivation were compared by suppression subtractive hybridization technology. Two subtracted libraries of autotrophic and heterotrophic C. pyrenoidosa F-9 were constructed, and 160 clones from the heterotrophic library were randomly selected for DNA sequencing. Dot blot hybridization showed that the ratio of positivity was 70.31% from the 768 clones. Five chloroplast genes (ftsH, psbB, rbcL, atpB, and infA) and two mitochondrial genes (cox2 and nad6) were selected to verify their expression levels by real-time quantitative polymerase chain reaction. Results showed that the seven genes were abundantly expressed in the heterotrophic culture. Among the seven genes, the smallest increment of gene expression was for ftsH, which was expressed 1.31-1.85-fold higher under heterotrophic culture than under autotrophic culture, and the largest was for psbB, which increased 28.07-39.36 times compared with that under autotrophic conditions. The expression levels of the other five genes were about 10 times higher in heterotrophic algae than in autotrophic algae. In conclusion, the chloroplast and mitochondrial genes in C. pyrenoidosa F-9 might be actively involved in heterotrophic metabolism.", "title": "" }, { "docid": "c185493668b49314afea915d1a2fc839", "text": "In recent years, Particle Swarm Optimization has evolved into an effective global optimization algorithm whose dynamics is inspired by the swarming or collaborative behavior of biological populations. In this paper, PSO has been applied to the Triple Link Inverted Pendulum model to find its reduced-order model by minimizing the error between the step responses of the higher-order and reduced-order models. Model Order Reduction using the PSO algorithm is advantageous due to its ease of implementation, higher accuracy, and decreased computation time. 
The second- and third-order reduced transfer functions of the Triple Link Inverted Pendulum have been computed for comparison. Keywords—Particle Swarm Optimization, Triple Link Inverted Pendulum, Model Order Reduction, Pole Placement technique.", "title": "" }, { "docid": "2805fdd4cd97931497b6c42263a20534", "text": "The well-established Modulation Transfer Function (MTF) is an imaging performance parameter that is well suited to describing certain sources of detail loss, such as optical focus and motion blur. As performance standards have developed for digital imaging systems, the MTF concept has been adapted and applied as the spatial frequency response (SFR). The international standard for measuring digital camera resolution, ISO 12233, was adopted over a decade ago. Since then the slanted edge-gradient analysis method on which it was based has been improved and applied beyond digital camera evaluation. Practitioners have modified minor elements of the standard method to suit specific system characteristics, unique measurement needs, or computational shortcomings in the original method. Some of these adaptations have been documented and benchmarked, but a number have not. In this paper we describe several of these modifications, and how they have improved the reliability of the resulting system evaluations. We also review several ways the method has been adapted and applied beyond camera resolution.", "title": "" }, { "docid": "8d8e7327f79b256b1ee9dac9a2573b55", "text": "The objective of this work is set-based face recognition, i.e. to decide if two sets of images of a face are of the same person or not. Conventionally, the set-wise feature descriptor is computed as an average of the descriptors from individual face images within the set. In this paper, we design a neural network architecture that learns to aggregate based on both “visual” quality (resolution, illumination), and “content” quality (relative importance for discriminative classification). 
To this end, we propose a Multicolumn Network (MN) that takes a set of images (the number in the set can vary) as input, and learns to compute a fixed-size feature descriptor for the entire set. To encourage high-quality representations, each individual input image is first weighted by its “visual” quality, determined by a self-quality assessment module, and followed by a dynamic recalibration based on “content” qualities relative to the other images within the set. Both of these qualities are learnt implicitly during training for setwise classification. Compared with the previous state-of-the-art architectures trained with the same dataset (VGGFace2), our Multicolumn Networks show an improvement of between 2-6% on the IARPA IJB face recognition benchmarks, and exceed the state of the art for all methods on these benchmarks.", "title": "" }, { "docid": "d4fbd2f212367706cf47b6b25b5e9dcf", "text": "Web Services are considered an essential services-oriented technology today on networked application architectures due to their language and platform independence. Their language and platform independence also brings difficulties in testing them, especially in an automated manner. In this paper, a comparative evaluation of testing techniques based on TTCN-3 and SoapUI is performed, with the aim of contributing towards resolving these difficulties. Aspects of TTCN-3 and SoapUI are highlighted, including test abstraction, performance efficiency and powerful matching mechanisms in TTCN-3 that allow a separation between behaviour and the conditions governing behaviour. Keywords— Web Services Testing, Automated Testing, Web Testing, SoapUI, TTCN-3, Titan TTCN-3, Testing", "title": "" }, { "docid": "1495ed50a24703566b2bda35d7ec4931", "text": "This paper examines the passive dynamics of quadrupedal bounding. 
First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the self-stabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical system, and might explain the success of simple, open loop bounding controllers on our experimental robot. Portions of this paper have previously appeared in conference publications Poulakakis, Papadopoulos, and Buehler (2003) and Poulakakis, Smith, and Buehler (2005b). The first and third authors were with the Centre for Intelligent Machines at McGill University when this work was performed. Address all correspondence related to this paper to the first author. The International Journal of Robotics Research Vol. 25, No. 7, July 2006, pp. 669-687. DOI: 10.1177/0278364906066768. ©2006 SAGE Publications. Figures appear in color online: http://ijr.sagepub.com KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot", "title": "" }, { "docid": "c58fb835c15cd7a55500bb953a336a96", "text": "A stretchable, flexible loop antenna working at the 2.4 GHz ISM band was fabricated by the additive manufacturing (AM) 3-D printing technology. 
NinjaFlex, a flexible 3-D printable material, was utilized for the first time as a 3-D hemi-sphere substrate for the loop antenna. A 3-D printer based on the Fused Deposition Modeling (FDM) technology was employed to 3-D print the substrate material. The resonance frequency of the antenna shifts in response to the applied force, which makes the configuration suitable for wireless strain sensing applications. The proposed antenna was designed for wearable electronics applications such as health monitoring earrings. Hence it was designed in such a way that it maintains the Specific Absorption Rate (SAR) of the human head tissues within the assigned standard limits when placed near a replica of a human head. The proposed antenna system could be useful in additively manufactured wearable packaging and IoT applications.", "title": "" }, { "docid": "c19aaa19662d495b0bcde005c825bcc7", "text": "Legacy information systems typically form the backbone of the information flow within an organisation and are the main vehicle for consolidating information about the business. As a solution to the problems these systems pose (brittleness, inflexibility, isolation, non-extensibility, lack of openness, etc.), many companies are migrating their legacy systems to new environments which allow the information system to more easily adapt to new business requirements. This paper presents a survey of research into Migration of Legacy Information Systems. The main problems that companies with legacy systems must face are analysed, and the challenges that possible solutions must solve are discussed. The paper provides an overview of the most important currently available solutions, and their main downsides are discussed. Jesus Bisbal, Deirdre Lawless, Ray Richardson, Donie O’Sullivan, Bing Wu, Jane Grimson, Vincent Wade; Broadcom Éireann Research; Trinity College, Dublin, Ireland. 
", "title": "" }, { "docid": "dd289b9e7b8e1f40863d4e2097f5f29a", "text": "Successful software development is becoming increasingly important as software-based systems are at the core of a company's new products. However, recent surveys show that most projects fail to meet their targets, highlighting the inadequacies of traditional project management techniques in coping with the unique characteristics of this field. Despite the major breakthroughs in the discipline of software engineering, improvement of management methodologies has not occurred, and it is now recognised that the major opportunities for better results are to be found in this area. Poor strategic management and related human factors have been cited as a major cause of failures in several industries. Traditional project management techniques have proven inadequate for incorporating these higher-level and softer issues explicitly. System Dynamics emerged as a methodology for modelling the behaviour of complex socio-economic systems. There have been a number of applications to project management, and in particular in the field of software development. This new approach provides the opportunity for an alternative view in which the major project influences are considered and quantified explicitly. Grounded on a holistic perspective, it avoids consideration of the detail required by the traditional tools and ensures that the key aspects of the general project behaviour are the main priority. However, if the approach is to play a core role in the future of software project management, it needs to be embedded within the traditional decision-making framework. The authors developed a conceptual integrated model, the PMIM, which is now being tested and improved within a large on-going software project. 
Such a framework should specify the roles of system dynamics models, how they are to be used within the traditional management process, how they exchange information with the traditional models, and a general method to support model development. This paper identifies the distinctive contribution of System Dynamics to software management, proposes a conceptual model for an integrated management framework, and discusses its underlying principles. Management Science, University of Strathclyde, Graham Hills Building, 40 George Street, Glasgow, Scotland.", "title": "" }, { "docid": "52faf4868f53008eec1f3ea4f39ed3f0", "text": "Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (a shear-stress test and compression tests in static and dynamic modes) were carried out on nine CE-marked cross-linked HA fillers. The corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. We show here that the tested products behave differently under shear stress and under compression even though they are used for the same indications. 
G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'. In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts.", "title": "" }, { "docid": "311d186966b7d697731e4c2450289418", "text": "PURPOSE OF REVIEW\nThe goal of this paper is to review current literature on nutritional ketosis within the context of weight management and metabolic syndrome, namely, insulin resistance, lipid profile, cardiovascular disease risk, and development of non-alcoholic fatty liver disease. We provide background on the mechanism of ketogenesis and describe nutritional ketosis.\n\n\nRECENT FINDINGS\nNutritional ketosis has been found to improve metabolic and inflammatory markers, including lipids, HbA1c, high-sensitivity CRP, fasting insulin and glucose levels, and aid in weight management. We discuss these findings and elaborate on potential mechanisms of ketones for promoting weight loss, decreasing hunger, and increasing satiety. Humans have evolved with the capacity for metabolic flexibility and the ability to use ketones for fuel. During states of low dietary carbohydrate intake, insulin levels remain low and ketogenesis takes place. 
These conditions promote breakdown of excess fat stores, sparing of lean muscle, and improvement in insulin sensitivity.", "title": "" }, { "docid": "1d234016baf0a3652c7ca668598ea8b6", "text": "The dilemma between information gathering (exploration) and reward seeking (exploitation) is a fundamental problem for reinforcement learning agents. How humans resolve this dilemma is still an open question, because experiments have provided equivocal evidence about the underlying algorithms used by humans. We show that two families of algorithms can be distinguished in terms of how uncertainty affects exploration. Algorithms based on uncertainty bonuses predict a change in response bias as a function of uncertainty, whereas algorithms based on sampling predict a change in response slope. Two experiments provide evidence for both bias and slope changes, and computational modeling confirms that a hybrid model is the best quantitative account of the data.", "title": "" }, { "docid": "c2d0a4934c6c61d65d8b137ebbeb2f26", "text": "The fifth generation (5G) mobile communication networks will require a major paradigm shift to satisfy the increasing demand for higher data rates, lower network latencies, better energy efficiency, and reliable ubiquitous connectivity. With prediction of the advent of 5G systems in the near future, many efforts and revolutionary ideas have been proposed and explored around the world. The major technological breakthroughs that will bring renaissance to wireless communication networks include (1) a wireless software-defined network, (2) network function virtualization, (3) millimeter wave spectrum, (4) massive MIMO, (5) network ultra-densification, (6) big data and mobile cloud computing, (7) scalable Internet of Things, (8) device-to-device connectivity with high mobility, (9) green communications, and (10) new radio access techniques. In this paper, the state-of-the-art and the potentials of these ten enabling technologies are extensively surveyed. 
Furthermore, the challenges and limitations for each technology are treated in depth, while the possible solutions are highlighted. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "916c7a159dd22d0a0c0d3f00159ad790", "text": "The concept of scalability was introduced to the IEEE 802.16 WirelessMAN Orthogonal Frequency Division Multiplexing Access (OFDMA) mode by the 802.16 Task Group e (TGe). A scalable physical layer enables standard-based solutions to deliver optimum performance in channel bandwidths ranging from 1.25 MHz to 20 MHz with fixed subcarrier spacing for both fixed and portable/mobile usage models, while keeping the product cost low. The architecture is based on a scalable subchannelization structure with variable Fast Fourier Transform (FFT) sizes according to the channel bandwidth. In addition to variable FFT sizes, the specification supports other features such as Advanced Modulation and Coding (AMC) subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency uplink subchannel structures, Multiple-Input-MultipleOutput (MIMO) diversity, and coverage enhancing safety channels, as well as other OFDMA default features such as different subcarrier allocations and diversity schemes. The purpose of this paper is to provide a brief tutorial on the IEEE 802.16 WirelessMAN OFDMA with an emphasis on scalable OFDMA. INTRODUCTION The IEEE 802.16 WirelessMAN standard [1] provides specifications for an air interface for fixed, portable, and mobile broadband wireless access systems. The standard includes requirements for high data rate Line of Sight (LOS) operation in the 10-66 GHz range for fixed wireless networks as well as requirements for Non Line of Sight (NLOS) fixed, portable, and mobile systems operating in sub 11 GHz licensed and licensed-exempt bands. 
Because of its superior performance in multipath fading wireless channels, Orthogonal Frequency Division Multiplexing (OFDM) signaling is recommended in OFDM and WirelessMAN OFDMA Physical (PHY) layer modes of the 802.16 standard for operation in sub 11 GHz NLOS applications. OFDM technology has been recommended in other wireless standards such as Digital Video Broadcasting (DVB) [2] and Wireless Local Area Networking (WLAN) [3]-[4], and it has been successfully implemented in the compliant solutions. Amendments for PHY and Medium Access Control (MAC) layers for mobile operation are being developed (working drafts [5] are being debated at the time of publication of this paper) by TGe of the 802.16 Working Group. The task group’s responsibility is to develop enhancement specifications to the standard to support Subscriber Stations (SS) moving at vehicular speeds and thereby specify a system for combined fixed and mobile broadband wireless access. Functions to support optional PHY layer structures, mobile-specific MAC enhancements, higher-layer handoff between Base Stations (BS) or sectors, and security features are among those specified. Operation in mobile mode is limited to licensed bands suitable for mobility between 2 and 6 GHz. Unlike many other OFDM-based systems such as WLAN, the 802.16 standard supports variable bandwidth sizes between 1.25 and 20 MHz for NLOS operations. This feature, along with the requirement for support of combined fixed and mobile usage models, makes the need for a scalable design of OFDM signaling inevitable. More specifically, neither one of the two OFDM-based modes of the 802.16 standard, WirelessMAN OFDM and OFDMA (without scalability option), can deliver the kind of performance required for operation in vehicular mobility multipath fading environments for all bandwidths in the specified range, without scalability enhancements that guarantee fixed subcarrier spacing for OFDM signals. 
The concept of scalable OFDMA is introduced to the IEEE 802.16 WirelessMAN OFDMA mode by the 802.16 TGe and has been the subject of many contributions to the standards committee [6]-[9]. Other features such as AMC subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency Uplink (UL) subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, enhanced Advanced Antenna Systems (AAS), and coverage enhancing safety channels were introduced [10]-[14] simultaneously to enhance coverage and capacity of mobile systems while providing the tools to trade off mobility with capacity. (Intel Technology Journal, Volume 8, Issue 3, 2004: Scalable OFDMA Physical Layer in IEEE 802.16 WirelessMAN.) The rest of the paper is organized as follows. In the next section we cover multicarrier system requirements, drivers of scalability, and design tradeoffs. We follow that with a discussion in the following six sections of the OFDMA frame structure, subcarrier allocation modes, Downlink (DL) and UL MAP messaging, diversity options, ranging in OFDMA, and channel coding options. Note that although the IEEE P802.16-REVd was ratified shortly before the submission of this paper, the IEEE P802.16e was still in draft stage at the time of submission, and the contents of this paper therefore are based on proposed contributions to the working group. MULTICARRIER DESIGN REQUIREMENTS AND TRADEOFFS A typical early step in the design of an Orthogonal Frequency Division Multiplexing (OFDM)-based system is a study of subcarrier design and the size of the Fast Fourier Transform (FFT) where optimal operational point balancing protection against multipath, Doppler shift, and design cost/complexity is determined. For this, we use Wide-Sense Stationary Uncorrelated Scattering (WSSUS), a widely used method to model time varying fading wireless channels both in time and frequency domains using stochastic processes. 
Two main elements of the WSSUS model are briefly discussed here: Doppler spread and coherence time of the channel; and multipath delay spread and coherence bandwidth. A maximum speed of 125 km/hr is used here in the analysis for support of mobility. With the exception of high-speed trains, this provides a good coverage of vehicular speed in the US, Europe, and Asia. The maximum Doppler shift [15] corresponding to operation at 3.5 GHz (selected as a middle point in the 2-6 GHz frequency range) is given by Equation (1): f_m = ν/λ = (35 m/s)/(0.086 m) = 408 Hz (Equation 1). The worst-case Doppler shift value for 125 km/hr (35 m/s) would be ~700 Hz for operation at the 6 GHz upper limit specified by the standard. Using a 10 kHz subcarrier spacing, the Inter Channel Interference (ICI) power corresponding to the Doppler shift calculated in Equation (1) can be shown [16] to be limited to ~-27 dB. The coherence time of the channel, a measure of time variation in the channel, corresponding to the Doppler shift specified above, is calculated in Equation (2) [15].", "title": "" }, { "docid": "f6227013273d148321cab1eef83c40e5", "text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless network particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. 
This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.", "title": "" }, { "docid": "57cb8a4cf69a2be4dc02e93ed2152331", "text": "Suicidal behavior is a leading cause of death and disability worldwide. Fortunately, recent developments in suicide theory and research promise to meaningfully advance knowledge and prevention. One key development is the ideation-to-action framework, which stipulates that (a) the development of suicidal ideation and (b) the progression from ideation to suicide attempts are distinct phenomena with distinct explanations and predictors. A second key development is a growing body of research distinguishing factors that predict ideation from those that predict suicide attempts. For example, it is becoming clear that depression, hopelessness, most mental disorders, and even impulsivity predict ideation, but these factors struggle to distinguish those who have attempted suicide from those who have only considered suicide. Means restriction is also emerging as a highly effective way to block progression from ideation to attempt. A third key development is the proliferation of theories of suicide that are positioned within the ideation-to-action framework. These include the interpersonal theory, the integrated motivational-volitional model, and the three-step theory. 
These perspectives can and should inform the next generation of suicide research and prevention.", "title": "" } ]
scidocsrr
302dfdfa6b9e127de4413a1484c48c8c
Domain Adaptation for CNN Based Iris Segmentation
[ { "docid": "957e103d533b3013e24aebd3617edd87", "text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.", "title": "" } ]
[ { "docid": "ec5dc7aaa399af3a3db080588df1376f", "text": "Dimensionality reduction plays an important role in many data mining applications involving high-dimensional data. Many existing dimensionality reduction techniques can be formulated as a generalized eigenvalue problem, which does not scale to large-size problems. Prior work transforms the generalized eigenvalue problem into an equivalent least squares formulation, which can then be solved efficiently. However, the equivalence relationship only holds under certain assumptions without regularization, which severely limits their applicability in practice. In this paper, an efficient two-stage approach is proposed to solve a class of dimensionality reduction techniques, including Canonical Correlation Analysis, Orthonormal Partial Least Squares, linear Discriminant Analysis, and Hypergraph Spectral Learning. The proposed two-stage approach scales linearly in terms of both the sample size and data dimensionality. The main contributions of this paper include (1) we rigorously establish the equivalence relationship between the proposed two-stage approach and the original formulation without any assumption; and (2) we show that the equivalence relationship still holds in the regularization setting. We have conducted extensive experiments using both synthetic and real-world data sets. Our experimental results confirm the equivalence relationship established in this paper. Results also demonstrate the scalability of the proposed two-stage approach.", "title": "" }, { "docid": "dbb5081b819938a3a8d6003576874d10", "text": "The importance of recognizing early melanoma is generally accepted. Because not all pigmented skin lesions can be diagnosed correctly by their clinical appearance, additional criteria are required for the clinical diagnosis of such lesions. 
In vivo epiluminescence microscopy provides for a more detailed inspection of the surface of pigmented skin lesions, and, by using the oil immersion technic, which renders the epidermis translucent, opens a new dimension of skin morphology by including the dermoepidermal junction into the macroscopic evaluation of a lesion. In an epiluminescence microscopy study of more than 3000 pigmented skin lesions we have defined morphologic criteria that are not readily apparent to the naked eye but that are detected easily by epiluminescence microscopy and represent relatively reliable markers of benign and malignant pigmented skin lesions. These features include specific patterns, colors, and intensities of pigmentation, as well as the configuration, regularity, and other characteristics of both the margin and the surface of pigmented skin lesions. Pattern analysis of these features permits a distinction between different types of pigmented skin lesions and, in particular, between benign and malignant growth patterns. Epiluminescence microscopy is thus a valuable addition to the diagnostic armamentarium of pigmented skin lesions at a clinical level.", "title": "" }, { "docid": "1d1484dd1924ab7a0620a82cd80eac4a", "text": "Storytelling is a practical and powerful teaching tool, especially for language learning. Teachers in language classrooms, however, may hesitate to incorporate storytelling into language instruction because of an already overloaded curriculum. English foreign language (EFL) teachers in Taiwan report additional problems such as having little prior experience with integrating storytelling into language teaching, locating appropriate stories, and lacking the cultural and language abilities to handle storytelling in English. On the other hand, researchers have demonstrated successful usages of computer and network-assisted English learning. 
The researchers in this study have developed a multimedia Storytelling Website to study how web-based technology can assist in overcoming the obstacles mentioned above. The website contains an accounts administration module, a multimedia story composing module, and a story re-playing module. In order to demonstrate the effectiveness of this Website in significantly facilitating teachers' storytelling and children's story recall processes in EFL classrooms, it was implemented in one elementary school to test its effectiveness in instruction and in resultant student learning. The results of the study support the significance and the educational value of the multimedia Storytelling Website for EFL teaching and learning. If such a Website can be applied within elementary EFL classrooms, the quality of teaching and learning can be improved and students' enjoyment and success in EFL learning may increase. 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2004.08.013. * Corresponding author. E-mail address: wtsou@ipx.ntntc.edu.tw (W. Tsou). W. Tsou et al. / Computers & Education 47 (2006) 17–28.", "title": "" }, { "docid": "c1d5df0e2058e3f191a8227fca51a2fb", "text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to GAN. 
Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "title": "" }, { "docid": "200e57c377bd2d211fe2948b83a425f9", "text": "A protein secondary structure defines the local conformation of the protein's polypeptide backbone, which provides important information for protein 3D structure prediction and protein functions. In this study, a new deep neural network, the deep neighbor residual network (DeepNRN), is proposed for protein secondary structure predictions. 
The network takes three types of inputs, namely protein sequence features, profile features generated by PSI-BLAST, and profile features generated by HHBlits, and predicts the protein secondary structure in either one of eight states (Q8) or one of three states (Q3). The basic building block of the network, the neighbor residual unit, is designed with two types of short-cut connections that are more general and expressive than the residual units in existing residual deep neural networks, yet can still be computed efficiently. In addition, the prediction result of DeepNRN can be refined by a Struct2Struct network to make the result more protein-like. Extensive experimental results on multiple widely used benchmark data sets show that the new DeepNRN-based method outperformed existing methods and obtained the best results across multiple data sets.", "title": "" }, { "docid": "85aa1fb0b2e902ca2f52e597590c5736", "text": "Identities are known as the most sensitive information. With the increasing number of connected objects and identities (a connected object may have one or many identities), computing and communication capabilities have improved to manage these connected devices and meet the needs of this growth. Therefore, new IoT Identity Management System (IDMS) requirements have been introduced. In this work, we suggest an IDMS approach that protects private information and supports domain change in IoT for mobile clients using a personal authentication device. Firstly, we present basic concepts, existing requirements, and the limits of related works. We also propose new requirements and explain our motivations. Next, we describe our proposal. Finally, we give our security approach validation, perspectives, and some concluding remarks.", "title": "" }, { "docid": "9a2d79d9df9e596e26f8481697833041", "text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. 
In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.", "title": "" }, { "docid": "155c692223bf8698278023c04e07f135", "text": "Structure-function studies with mammalian reoviruses have been limited by the lack of a reverse-genetic system for engineering mutations into the viral genome. To circumvent this limitation in a partial way for the major outer-capsid protein sigma3, we obtained in vitro assembly of large numbers of virion-like particles by binding baculovirus-expressed sigma3 protein to infectious subvirion particles (ISVPs) that lack sigma3. A level of sigma3 binding approaching 100% of that in native virions was routinely achieved. 
The sigma3 coat in these recoated ISVPs (rcISVPs) appeared very similar to that in virions by electron microscopy and three-dimensional image reconstruction. rcISVPs retained full infectivity in murine L cells, allowing their use to study sigma3 functions in virus entry. Upon infection, rcISVPs behaved identically to virions in showing an extended lag phase prior to exponential growth and in being inhibited from entering cells by either the weak base NH4Cl or the cysteine proteinase inhibitor E-64. rcISVPs also mimicked virions in being incapable of in vitro activation to mediate lysis of erythrocytes and transcription of the viral mRNAs. Last, rcISVPs behaved like virions in showing minor loss of infectivity at 52 degrees C. Since rcISVPs contain virion-like levels of sigma3 but contain outer-capsid protein mu1/mu1C mostly cleaved at the delta-phi junction as in ISVPs, the fact that rcISVPs behaved like virions (and not ISVPs) in all of the assays that we performed suggests that sigma3, and not the delta-phi cleavage of mu1/mu1C, determines the observed differences in behavior between virions and ISVPs. To demonstrate the applicability of rcISVPs for genetic studies of protein functions in reovirus entry (an approach that we call recoating genetics), we used chimeric sigma3 proteins to localize the primary determinants of a strain-dependent difference in sigma3 cleavage rate to a carboxy-terminal region of the ISVP-bound protein.", "title": "" }, { "docid": "14d480e4c9256d0ef5e5684860ae4d7f", "text": "Changes in land use and land cover (LULC) as well as climate are likely to affect the geographic distribution of malaria vectors and parasites in the coming decades. At present, malaria transmission is concentrated mainly in the Amazon basin where extensive agriculture, mining, and logging activities have resulted in changes to local and regional hydrology, massive loss of forest cover, and increased contact between malaria vectors and hosts. 
Employing presence-only records, bioclimatic, topographic, hydrologic, LULC and human population data, we modeled the distribution of malaria and two of its dominant vectors, Anopheles darlingi and Anopheles nuneztovari s.l., in northern South America using the species distribution modeling platform Maxent. Results from our land change modeling indicate that about 70,000 km² of forest land would be lost by 2050 and 78,000 km² by 2070 compared to 2010. The Maxent model predicted zones of relatively high habitat suitability for malaria and the vectors mainly within the Amazon and along coastlines. While areas with malaria are expected to decrease in line with current downward trends, both vectors are predicted to experience range expansions in the future. Elevation, annual precipitation and temperature were influential in all models, both current and future. Human population mostly affected An. darlingi distribution, while LULC changes influenced An. nuneztovari s.l. distribution. As the region tackles the challenge of malaria elimination, investigations such as this could be useful for planning and management purposes and aid in predicting and addressing potential impediments to elimination.", "title": "" }, { "docid": "c71ada1231703f2ecb2c2872ef7d5632", "text": "We present a spatial multiplex optical transmission system named the “Smart Light” (see Figure 1), which provides multiple data streams to multiple points simultaneously. This system consists of a projector and some devices along with a photo-detector. The projector projects images with invisible information to the devices, and the devices receive some data. In this system, the data stream is expandable to a position-based audio or video stream by using DMDs (Digital Micro-mirror Devices) or LEDs (Light Emitting Diodes) with unperceivable space-time modulation. 
First, in a preliminary experiment, we confirmed with a commercially produced XGA grade projector transmitting a million points that the data rate of each path is a few bits per second. Detached devices can receive relative position data and other properties from the projector. Second, we made an LED type high-speed projector to transmit audio streams using modulated light on an object and confirmed the transmission of position-based audio stream data.", "title": "" }, { "docid": "6dbabfe7370b19c55a52671c82c3e3c8", "text": "The development of a compact circular polarization Orthomode Transducer (OMT) working in two frequency bands with dual circular polarization (RHCP & LHCP) is presented. The device covers the complete communication spectrum allocated at C-band. At the same time, the device presents high power handling capability and very low mass and envelope size. The OMT plus a feed horn are used to illuminate a Reflector antenna, the surface of which is shaped to provide domestic or regional coverage from geostationary orbit. The full band operation increases the earth-satellite communication capability. The paper will show the selected OMT architecture and the RF performances at unit level and at component level. RF power aspects like multipaction and PIM are addressed. This development was performed under the European Space Agency ESA ARTES-4 program.", "title": "" }, { "docid": "e00c05ab9796c6c217e00695adcb07ac", "text": "Web 2.0 technologies opened up new perspectives in learning and teaching activities. Collaboration, communication and sharing between learners contribute to self-regulated learning, a bottom-up approach. The market for smartphones and tablets is growing rapidly. They are being used more often in everyday life. This allows us to support self-regulated learning in a way that learning resources and applications are accessible any time and at any place. 
This publication focuses on the Personal Learning Environment (PLE) that was launched at Graz University of Technology in 2010. After a first prototype, a complete redesign was carried out to effect a change towards a learner-centered framework. Statistical data show a high increase in the attractiveness of the whole system in general. As the next step, a mobile version is integrated. A converter for browser-based learning apps within the PLE to native smartphone apps leads to the Ubiquitous PLE, which is discussed in this paper in detail.", "title": "" }, { "docid": "3ebc26643334c88ccc44fb01f60d600f", "text": "Skin whitening products are commercially available for cosmetic purposes in order to obtain a lighter skin appearance. They are also utilized for clinical treatment of pigmentary disorders such as melasma or postinflammatory hyperpigmentation. Whitening agents act at various levels of melanin production in the skin. Many of them are known as competitive inhibitors of tyrosinase, the key enzyme in melanogenesis. Others inhibit the maturation of this enzyme or the transport of pigment granules (melanosomes) from melanocytes to surrounding keratinocytes. In this review we present an overview of (natural) whitening products that may decrease skin pigmentation by their interference with the pigmentary processes.", "title": "" }, { "docid": "7ec6540b44b23a0380dcb848239ccac4", "text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. 
The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the ε±μ± parameter. A phenomenological analysis of the permeability μ and permittivity ε in dispersive media serves to introduce the corresponding magnetic- and electric-dipole mechanisms of interaction with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faraday-rotation isolator circulator at 35 GHz (λ ≈ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (λ = 1.55 μm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, ε rather than μ provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. 
For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "61a8db272f21704ed4afc9d21a4b1bdc", "text": "Today's business enterprises must deal with global competition, reduce the cost of doing business, and rapidly develop new services and products. To address these requirements, enterprises must constantly reconsider and optimize the way they do business and change their information systems and applications to support evolving business processes. Workflow technology facilitates this by providing methodologies and software to support (i) business process modeling to capture business processes as workflow specifications, (ii) business process reengineering to optimize specified processes, and (iii) workflow automation to generate workflow implementations from workflow specifications. This paper provides a high-level overview of the current workflow management methodologies and software products. In addition, we discuss the infrastructure technologies that can address the limitations of current commercial workflow technology and extend the scope and mission of workflow management systems to support increased workflow automation in complex real-world environments involving heterogeneous, autonomous, and distributed information systems. In particular, we discuss how distributed object management and customized transaction management can support further advances in the commercial state of the art in this area.", "title": "" }, { "docid": "ece75610b34e3c5353bceb757bb7d90b", "text": "A biometric system provides a way of automatically verifying or identifying a person. 
However, due to a lack of secrecy, there are nowadays many security threats based on spoofing. Spoofing with a photograph or video is one of the most common ways to attack a face recognition system. Liveness detection is a technique that can be used to validate whether the data originate from a valid user or not. Liveness detection can be hardware based, software based, or a combination of both. In this paper, we present a non-intrusive and real-time method to address this problem, based on the skin elasticity of the human face. In this technique, the user is asked to perform movements such as chewing and forehead movement simultaneously, so that the facial skin undergoes a full range of movement, and then a sequence of face images is captured with a gap of a few milliseconds. Then, by computing correlation coefficients between the images and applying discriminant analysis, facial skin is discriminated from other materials such as gelatin, rubber, cadaver, and clay. In comparison to other face liveness detection methods, this method is much more user friendly. On the other hand, one of the images captured for liveness detection can be used for face recognition. 
Keywords— Biometrics, Face Recognition, Fake Face Detection, Liveness Detection, Skin Elasticity", "title": "" }, { "docid": "0e5a7266493c746c107171de8d3c4392", "text": "STUDY OBJECTIVE\nTo determine the reliability, validity, and stability of a triaxial accelerometer for walking and daily activity measurement in a COPD sample.\n\n\nDESIGN\nCross-sectional, correlational, descriptive design.\n\n\nSETTING\nOutpatient pulmonary rehabilitation program in a university-affiliated Veterans Affairs medical center.\n\n\nPARTICIPANTS\nForty-seven outpatients (44 men and 3 women) with stable COPD (FEV(1), 37% predicted; SD, 16%) prior to entry into a pulmonary rehabilitation program.\n\n\nMEASUREMENTS AND RESULTS\nTest-retest reliability of a triaxial movement sensor (Tritrac R3D Research Ergometer; Professional Products; Madison, WI) was evaluated in 35 of the 47 subjects during three standardized 6-min walks (intraclass correlation coefficient [rICC] = 0.84). Pearson correlations evaluated accelerometer concurrent validity as a measure of walking (in vector magnitude units), compared to walking distance in all 47 subjects during three sequential 6-min walks (0.84, 0.85, and 0.95, respectively; p < 0.001). The validity of the accelerometer as a measure of daily activity over 3 full days at home was evaluated in all subjects using Pearson correlations with other indicators of functional capacity. The accelerometer correlated with exercise capacity (maximal 6-min walk, r = 0.74; p < 0.001); level of obstructive disease (FEV(1) percent predicted, r = 0.62; p < 0.001); dyspnea (Functional Status and Dyspnea Questionnaire, dyspnea over the past 30 days, r = -0.29; p < 0.05); and activity self-efficacy (Activity Self-Efficacy Questionnaire, r = 0.43; p < 0.01); but not with self-report of daily activity (Modified Activity Recall Questionnaire, r = 0.14; not significant). 
Stability of the accelerometer to measure 3 full days of activity at home was determined by an rICC of 0.69.\n\n\nCONCLUSIONS\nThis study provides preliminary data suggesting that a triaxial movement sensor is a reliable, valid, and stable measure of walking and daily physical activity in COPD patients. It has the potential to provide more precise measurement of everyday physical functioning in this population than self-report measures currently in use, and measures an important dimension of functional status not previously well-described.", "title": "" }, { "docid": "db8b26229ced95bab2028d0b8eb8a43f", "text": "OBJECTIVES\nThis study investigated isometric and isokinetic hip strength in individuals with and without symptomatic femoroacetabular impingement (FAI). The specific aims were to: (i) determine whether differences exist in isometric and isokinetic hip strength measures between groups; (ii) compare hip strength agonist/antagonist ratios between groups; and (iii) examine relationships between hip strength and self-reported measures of either hip pain or function in those with FAI.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nFifteen individuals (11 males; 25±5 years) with symptomatic FAI (clinical examination and imaging (alpha angle >55° (cam FAI), and lateral centre edge angle >39° and/or positive crossover sign (combined FAI))) and 14 age- and sex-matched disease-free controls (no morphological FAI on magnetic resonance imaging) underwent strength testing. Maximal voluntary isometric contraction strength of hip muscle groups and isokinetic hip internal (IR) and external rotation (ER) strength (20°/s) were measured. Groups were compared with independent t-tests and Mann-Whitney U tests.\n\n\nRESULTS\nParticipants with FAI had 20% lower isometric abduction strength than controls (p=0.04). There were no significant differences in isometric strength for other muscle groups or peak isokinetic ER or IR strength. 
The ratio of isometric, but not isokinetic, ER/IR strength was significantly higher in the FAI group (p=0.01). There were no differences in ratios for other muscle groups. Angle of peak IR torque was the only feature correlated with symptoms.\n\n\nCONCLUSIONS\nIndividuals with symptomatic FAI demonstrate isometric hip abductor muscle weakness and strength imbalance in the hip rotators. Strength measurement, including agonist/antagonist ratios, may be relevant for clinical management of FAI.", "title": "" }, { "docid": "19da793660c1ab90b0da41842efa790b", "text": "In this paper, we propose a method to optimally set the tap position of voltage regulation transformers in distribution systems. We cast the problem as a rank-constrained semidefinite program (SDP), in which the transformer tap ratios are captured by 1) introducing a secondary-side “virtual” bus per transformer, and 2) constraining the values that these virtual bus voltages can take according to the limits on the tap positions. Then, by relaxing the non-convex rank-1 constraint in the rank-constrained SDP formulation, one obtains a convex SDP problem. The tap positions are determined as the ratio between the primary-side bus voltage and the secondary-side virtual bus voltage that result from the optimal solution of the relaxed SDP, and then rounded to the nearest discrete tap values. To efficiently solve the relaxed SDP, we propose a distributed algorithm based on the alternating direction method of multipliers (ADMM). We present several case studies with single- and three-phase distribution systems to demonstrate the effectiveness of the distributed ADMM-based algorithm, and compare its results with centralized solution methods.", "title": "" } ]
scidocsrr
0da49d505b8f9ae7159387be8707995b
Single Image Action Recognition Using Semantic Body Part Actions
[ { "docid": "cf5829d1bfa1ae243bbf67776b53522d", "text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.", "title": "" } ]
[ { "docid": "be3f18e5fbaf3ad45976ca867698a4bc", "text": "Widespread adoption of internet technologies has changed the way that news is created and consumed. The current online news environment is one that incentivizes speed and spectacle in reporting, at the cost of fact-checking and verification. The line between user generated content and traditional news has also become increasingly blurred. This poster reviews some of the professional and cultural issues surrounding online news and argues for a two-pronged approach inspired by Hemingway’s “automatic crap detector” (Manning, 1965) in order to address these problems: a) proactive public engagement by educators, librarians, and information specialists to promote digital literacy practices; b) the development of automated tools and technologies to assist journalists in vetting, verifying, and fact-checking, and to assist news readers by filtering and flagging dubious information.", "title": "" }, { "docid": "40e55e77a59e3ed63ae0a86b0c832f32", "text": "The decision tree is an important method for both induction research and data mining, and is mainly used for model classification and prediction. The ID3 algorithm is the most widely used decision tree algorithm so far. After illustrating the basic ideas of decision trees in data mining, this paper discusses ID3's shortcoming of inclining to choose attributes with many values, and then presents a new decision tree algorithm combining ID3 and an Association Function (AF). The experimental results show that the proposed algorithm can effectively overcome ID3's shortcoming and obtain more reasonable and effective rules.", "title": "" }, { "docid": "c2d41a58c4c11dd65f5f8e5215be7655", "text": "We present the task of second language acquisition (SLA) modeling. Given a history of errors made by learners of a second language, the task is to predict errors that they are likely to make at arbitrary points in the future. 
We describe a large corpus of more than 7M words produced by more than 6k learners of English, Spanish, and French using Duolingo, a popular online language-learning app. Then we report on the results of a shared task challenge aimed at studying the SLA task via this corpus, which attracted 15 teams and synthesized work from various fields including cognitive science, linguistics, and machine learning.", "title": "" }, { "docid": "cc7c3b21f189d53ba3525d02d95d25c9", "text": "A polarization reconfigurable slot antenna with a novel coplanar waveguide (CPW)-to-slotline transition for wireless local area networks (WLANs) is proposed and tested. The antenna consists of a square slot, a reconfigurable CPW-to-slotline transition, and two p-i-n diodes. No extra matching structure is needed for mode transitions, which makes it much more compact than all reference designs. The -10 dB bandwidths of an antenna with an implemented bias circuit are 610 MHz (25.4%) and 680 MHz (28.3%) for vertical and horizontal polarizations, respectively. The radiation pattern and gain of the proposed antenna are also tested, and the radiation pattern data were compared to simulation results.", "title": "" }, { "docid": "798e7781345a88acdd2f3d388a03802d", "text": "Measuring the similarity between nominal variables is an important problem in data mining. It is the basis for measuring the similarity of data objects that contain nominal variables. There are two kinds of traditional methods for this task: the first simply distinguishes variables as same or not same, while the second measures the similarity based on co-occurrence with variables of other attributes. Though they perform well in some conditions, they are still not accurate enough. This paper proposes an algorithm to measure the similarity between nominal variables of the same attribute based on the fact that the similarity between nominal variables depends on the relationship between the subsets which hold them in the same dataset. 
This algorithm uses the difference between the distributions, quantified by f-divergence, to form feature vectors of nominal variables. The theoretical analysis helps to choose the best metric from the four most commonly used forms of f-divergence. The time complexity of the method is linear in the size of the dataset, which makes this method suitable for processing large-scale data. The experiments, which use the derived similarity metrics with K-modes on extensive UCI datasets, demonstrate the effectiveness of our proposed method.", "title": "" }, { "docid": "9bc182298ad6158dbb5de4da15353312", "text": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or pairs of data. We derive a training algorithm for Spectral Inference Networks that addresses the bias in the gradients due to finite batch size and allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets as well as the Arcade Learning Environment. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators, can discover interpretable representations from video and find meaningful subgoals in reinforcement learning environments.", "title": "" }, { "docid": "9126eda46fe299bc3067bace979cdf5e", "text": "This paper considers the intersection of technology and play through the novel approach of gamification and its application to early years education. The intrinsic connection between play and technology is becoming increasingly significant in early years education. 
By creating an awareness of the early years adoption of technology into guiding frameworks, and then exploring the makeup of gaming elements, this paper draws connections for guiding principles in adopting more technology-focused play opportunities for Generation Alpha.", "title": "" }, { "docid": "74c6600ea1027349081c08c687119ee3", "text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.", "title": "" }, { "docid": "d3ae7f70b1d3fb1fbbf5fe9cd1a33bc8", "text": "Due to significant advances in SAT technology in the last years, its use for solving constraint satisfaction problems has been gaining wide acceptance. Solvers for satisfiability modulo theories (SMT) generalize SAT solving by adding the ability to handle arithmetic and other theories. Although there are results pointing out the adequacy of SMT solvers for solving CSPs, there are no available tools to extensively explore such adequacy. For this reason, in this paper we introduce a tool for translating FLATZINC (MINIZINC intermediate code) instances of CSPs to the standard SMT-LIB language. We provide extensive performance comparisons between state-of-the-art SMT solvers and most of the available FLATZINC solvers on standard FLATZINC problems. 
The obtained results suggest that state-of-the-art SMT solvers can be effectively used to solve CSPs.", "title": "" }, { "docid": "a9975365f0bad734b77b67f63bdf7356", "text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.", "title": "" }, { "docid": "b191b9829aac1c1e74022c33e2488bbd", "text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. 
During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.", "title": "" }, { "docid": "ca5eaacea8702798835ca585200b041d", "text": "Occupational Health Psychology concerns the application of psychology to improving the quality of work life and to protecting and promoting the safety, health, and well-being of workers. Contrary to what its name suggests, Occupational Health Psychology has almost exclusively dealt with ill health and poor wellbeing. For instance, a simple count reveals that about 95% of all articles that have been published so far in the leading Journal of Occupational Health Psychology have dealt with negative aspects of workers' health and well-being, such as cardiovascular disease, repetitive strain injury, and burnout. In contrast, only about 5% of the articles have dealt with positive aspects such as job satisfaction, commitment, and motivation. However, times appear to be changing. Since the beginning of this century, more attention has been paid to what has been coined positive psychology: the scientific study of human strength and optimal functioning. This approach is considered to supplement the traditional focus of psychology on psychopathology, disease, illness, disturbance, and malfunctioning. The emergence of positive (organizational) psychology has naturally led to the increasing popularity of positive aspects of health and well-being in Occupational Health Psychology. 
One of these positive aspects is work engagement, which is considered to be the antithesis of burnout. While burnout is usually defined as a syndrome of exhaustion, cynicism, and reduced professional efficacy, engagement is defined as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption. Engaged employees have a sense of energetic and effective connection with their work activities. Since this new concept was proposed by Wilmar Schaufeli (Utrecht University, the Netherlands) in 2001, 93 academic articles mainly focusing on the measurement of work engagement and its possible antecedents and consequences have been published (see www.schaufeli.com). In addition, major international academic conferences organized by the International Commission on Occupational", "title": "" }, { "docid": "0b1b4c8d501c3b1ab350efe4f2249978", "text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. 
The trajectory tracking control scheme is validated using an iRobot Packbot's parametric model estimated from experimental data.", "title": "" }, { "docid": "f48d87cb95488bba0c7e903e8bc20726", "text": "We address the problem of generating multiple hypotheses for structured prediction tasks that involve interaction with users or successive components in a cascaded architecture. Given a set of multiple hypotheses, such components/users typically have the ability to retrieve the best (or approximately the best) solution in this set. The standard approach for handling such a scenario is to first learn a single-output model and then produce M-Best Maximum a Posteriori (MAP) hypotheses from this model. In contrast, we learn to produce multiple outputs by formulating this task as a multiple-output structured-output prediction problem with a loss-function that effectively captures the setup of the problem. We present a max-margin formulation that minimizes an upper-bound on this loss-function. Experimental results on image segmentation and protein side-chain prediction show that our method outperforms conventional approaches used for this type of scenario and leads to substantial improvements in prediction accuracy.", "title": "" }, { "docid": "5aed256aaca0a1f2fe8a918e6ffb62bd", "text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. 
On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https: //github.com/pujols/Zero-shot-learning-journal. Soravit Changpinyo Google AI E-mail: schangpi@google.com Wei-Lun Chao Cornell University, Department of Computer Science E-mail: weilunchao760414@gmail.com Boqing Gong Tencent AI Lab E-mail: boqinggo@outlook.com Fei Sha University of Southern California, Department of Computer Science E-mail: feisha@usc.edu", "title": "" }, { "docid": "73a02535ca36f6233319536f70975366", "text": "Structured decorative patterns are common ornamentations in a variety of media like books, web pages, greeting cards and interior design. Creating such art from scratch using conventional software is time consuming for experts and daunting for novices. We introduce DecoBrush, a data-driven drawing system that generalizes the conventional digital \"painting\" concept beyond the scope of natural media to allow synthesis of structured decorative patterns following user-sketched paths. The user simply selects an example library and draws the overall shape of a pattern. DecoBrush then synthesizes a shape in the style of the exemplars but roughly matching the overall shape. If the designer wishes to alter the result, DecoBrush also supports user-guided refinement via simple drawing and erasing tools. 
For a variety of example styles, we demonstrate high-quality user-constrained synthesized patterns that visually resemble the exemplars while exhibiting plausible structural variations.", "title": "" }, { "docid": "0e37a1a251c97fd88aa2ab3ee9ed422b", "text": "k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithms. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm, however, it requires more computational time than the global k-means algorithm.", "title": "" }, { "docid": "bd1523c64d8ec69d87cbe68a4d73ea17", "text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. 
We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.", "title": "" }, { "docid": "c9fdd453232bc1ebd540624f5c81c65b", "text": "A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. 
This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states.", "title": "" }, { "docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a", "text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.", "title": "" } ]
scidocsrr
562ec1264e50a4dce04e20927fa35bfd
Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion
[ { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" }, { "docid": "9ab7304f37e64d26d1d77feb95d3f140", "text": "This paper presents experiments extending the work of Ba et al. (2014) on recurrent neural models for attention into less constrained visual environments, beginning with fine-grained categorization on the Stanford Dogs data set. In this work we use an RNN of the same structure but substitute a more powerful visual network and perform large-scale pre-training of the visual network outside of the attention RNN. 
Most work in attention models to date focuses on tasks with toy or more constrained visual environments. We present competitive results for fine-grained categorization. More importantly, we show that our model learns to direct high resolution attention to the most discriminative regions without any spatial supervision such as bounding boxes. Given a small input window, it is hence able to discriminate fine-grained dog breeds with cheap glances at faces and fur patterns, while avoiding expensive and distracting processing of entire images. In addition to allowing high resolution processing with a fixed budget and naturally handling static or sequential inputs, this approach has the major advantage of being trained end-to-end, unlike most current approaches which are heavily engineered.", "title": "" } ]
[ { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "5b2dc2f54f104857384e4d036680ee1c", "text": "Social Media (SM) has become a valuable information source to many in diverse situations. In IR, research has focused on real-time aspects and as such little is known about how long SM content is of value to users, if and how often it is re-accessed, the strategies people employ to re-access and if difficulties are experienced while doing so. We present results from a 5 month-long naturalistic, log-based study of user interaction with Twitter, which suggest re-finding to be a regular activity and that Tweets can offer utility for longer than one might think. We shed light on re-finding strategies revealing that remembered people are used as a stepping stone to Tweets rather than searching for content directly. Bookmarking strategies reported in the literature are used infrequently as a means to re-access. Finally, we show that by using statistical modelling it is possible to predict if a Tweet has future utility and is likely to be re-found. 
Our findings have implications for the design of social media search systems and interfaces, in particular for Twitter, to better support users in re-finding previously seen content.", "title": "" }, { "docid": "28fd4e290dfb7d2826c8720c134ae087", "text": "We examined parent-child relationship quality and positive mental well-being using Medical Research Council National Survey of Health and Development data. Well-being was measured at ages 13-15 (teacher-rated happiness), 36 (life satisfaction), 43 (satisfaction with home and family life) and 60-64 years (Diener Satisfaction With Life scale and Warwick Edinburgh Mental Well-being scale). The Parental Bonding Instrument captured perceived care and control from the father and mother to age 16, recalled by study members at age 43. Greater well-being was seen for offspring with higher combined parental care and lower combined parental psychological control (p < 0.05 at all ages). Controlling for maternal care and paternal and maternal behavioural and psychological control, childhood social class, parental separation, mother's neuroticism and study member's personality, higher well-being was consistently related to paternal care. This suggests that both mother-child and father-child relationships may have short and long-term consequences for positive mental well-being.", "title": "" }, { "docid": "1610802593a60609bc1213762a9e0584", "text": "We examined emotional stability, ambition (an aspect of extraversion), and openness as predictors of adaptive performance at work, based on the evolutionary relevance of these traits to human adaptation to novel environments. A meta-analysis on 71 independent samples (N = 7,535) demonstrated that emotional stability and ambition are both related to overall adaptive performance. Openness, however, does not contribute to the prediction of adaptive performance. 
Analysis of predictor importance suggests that ambition is the most important predictor for proactive forms of adaptive performance, whereas emotional stability is the most important predictor for reactive forms of adaptive performance. Job level (managers vs. employees) moderates the effects of personality traits: Ambition and emotional stability exert stronger effects on adaptive performance for managers as compared to employees.", "title": "" }, { "docid": "443fb61dbb3cc11060104ed6ed0c645c", "text": "An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.", "title": "" }, { "docid": "3122984a3e3e85abb201a822ac4ee92b", "text": "Fashion is an increasingly important topic in computer vision, in particular the so-called street-to-shop task of matching street images with shop images containing similar fashion items. 
Solving this problem promises new means of making fashion searchable and helping shoppers find the articles they are looking for. This paper focuses on finding pieces of clothing worn by a person in full-body or half-body images with neutral backgrounds. Such images are ubiquitous on the web and in fashion blogs, and are typically studio photos, we refer to this setting as studio-to-shop. Recent advances in computational fashion include the development of domain-specific numerical representations. Our model Studio2Shop builds on top of such representations and uses a deep convolutional network trained to match a query image to the numerical feature vectors of all the articles annotated in this image. Top-k retrieval evaluation on test query images shows that the correct items are most often found within a range that is sufficiently small for building realistic visual search engines for the studio-to-shop setting.", "title": "" }, { "docid": "d0e7bc4dab94eae7148ec0316918cf69", "text": "The exploitation of syntactic structures and semantic background knowledge has always been an appealing subject in the context of text retrieval and information management. The usefulness of this kind of information has been shown most prominently in highly specialized tasks, such as classification in Question Answering (QA) scenarios. So far, however, additional syntactic or semantic information has been used only individually. In this paper, we propose a principled approach for jointly exploiting both types of information. We propose a new type of kernel, the Semantic Syntactic Tree Kernel (SSTK), which incorporates linguistic structures, e.g. syntactic dependencies, and semantic background knowledge, e.g. term similarity based on WordNet, to automatically learn question categories in QA. 
We show the power of this approach in a series of experiments with a well known Question Classification dataset.", "title": "" }, { "docid": "b96836da7518ceccace39347f06067c6", "text": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While the image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms: the re-watching and the re-reading mechanisms and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.", "title": "" }, { "docid": "700a6c2741affdbdc2a5dd692130ebb0", "text": "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. 
Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "title": "" }, { "docid": "f5ea6cbf85b375c920283666657fe24d", "text": "The link, if any, between creativity and mental illness is one of the most controversial topics in modern creativity research. The present research assessed the relationships between anxiety and depression symptom dimensions and several facets of creativity: divergent thinking, creative self-concepts, everyday creative behaviors, and creative accomplishments. Latent variable models estimated effect sizes and their confidence intervals. Overall, measures of anxiety, depression, and social anxiety predicted little variance in creativity. 
Few models explained more than 3% of the variance, and the effect sizes were small and inconsistent in direction.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highly connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "be41d072e3897506fad111549e7bf862", "text": "Handling unbalanced data and noise are two important issues in the field of machine learning. 
This paper proposed a complete framework of fuzzy relevance vector machine by weighting the punishment terms of error in Bayesian inference process of relevance vector machine (RVM). Above problems can be learned within this framework with different kinds of fuzzy membership functions. Experiments on both synthetic data and real world data demonstrate that fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and reducing the effects of noises or outliers. © 2008 Published by Elsevier B.V.", "title": "" }, { "docid": "a497cb84141c7db35cd9a835b11f33d2", "text": "Ubiquitous nature of online social media and ever expanding usage of short text messages becomes a potential source of crowd wisdom extraction especially in terms of sentiments; therefore, sentiment classification and analysis is a significant task of current research purview. Major challenge in this area is to tame the data in terms of noise, relevance, emoticons, folksonomies and slangs. This work is an effort to see the effect of pre-processing on twitter data for the fortification of sentiment classification especially in terms of slang words. The proposed method of pre-processing relies on the bindings of slang words on other coexisting words to check the significance and sentiment translation of the slang word. We have used n-gram to find the bindings and conditional random fields to check the significance of slang words. Experiments were carried out to observe the effect of the proposed method on sentiment classification which clearly indicates the improvements in accuracy of classification. © 2016 The Authors. Published by Elsevier B.V. 
Peer-review under responsibility of organizing committee of the Twelfth International Multi-Conference on Information Processing-2016 (IMCIP-2016).", "title": "" }, { "docid": "b7eb937f9f9175b3c987417d6ef9abfe", "text": "Introduction: Emergency dispatch is a relatively new field, but the growth of dispatching as a profession, along with raised expectations for help before responders arrive, has led to increased production of and interest in emergency dispatch research. As yet, no systematic review of dispatch research has been conducted. Objective: This study reviewed the existing literature and indicated gaps in the research as well as potentially fruitful extensions of current lines of study. Methods: Dispatch-related terms were used to search for papers in research databases (including PubMed, MEDLINE, EMBASE, EMCARE, SciSearch, PsychInfo, and SCOPUS). All research papers with dispatching as the core focus were included. Results: A total 149 papers (114 original research, and 35 seminal concept papers) were identified. A vast majority dealt with medical dispatching (as opposed to police or fire dispatching). Four major issues emerged from the early history of emergency dispatch that continue to dominate dispatch studies: dispatch as first point of care, standardization of the dispatching process, resource allocation, and best practices for dispatching. Conclusion: Substantial peer-reviewed research does exist in dispatch studies. 
However, a lack of consistent metrics, the near-nonexistence of research in fire and police dispatching, and a relative lack of studies in many areas of interest indicate a need for increased participation in research by communication center administrators and others “on the ground” in emergency dispatch, as well as increased collaboration between research organizations and operations personnel.", "title": "" }, { "docid": "4655dcd241aa9e543111c5c95026b365", "text": "Received: 15 May 2002 Revised: 31 January 2003 Accepted: 18 July 2003 Abstract In this study, we developed a conceptual model for studying the adoption of electronic business (e-business or EB) at the firm level, incorporating six adoption facilitators and inhibitors, based on the technology–organization– environment theoretical framework. Survey data from 3100 businesses and 7500 consumers in eight European countries were used to test the proposed adoption model. We conducted confirmatory factor analysis to assess the reliability and validity of constructs. To examine whether adoption patterns differ across different e-business environments, we divided the full sample into high EB-intensity and low EB-intensity countries. After controlling for variations of industry and country effects, the fitted logit models demonstrated four findings: (1) Technology competence, firm scope and size, consumer readiness, and competitive pressure are significant adoption drivers, while lack of trading partner readiness is a significant adoption inhibitor. (2) As EB-intensity increases, two environmental factors – consumer readiness and lack of trading partner readiness – become less important, while competitive pressure remains significant. (3) In high EB-intensity countries, e-business is no longer a phenomenon dominated by large firms; as more and more firms engage in e-business, network effect works to the advantage of small firms. 
(4) Firms are more cautious in adopting e-business in high EB-intensity countries – it seems to suggest that the more informed firms are less aggressive in adopting e-business, a somewhat surprising result. Explanations and implications are offered. European Journal of Information Systems (2003) 12, 251–268. doi:10.1057/palgrave.ejis.3000475", "title": "" }, { "docid": "7c8d5da89424dfba8fc84c7cb4f36856", "text": "Advances in sensor data collection technology, such as pervasive and embedded devices, and RFID Technology have led to a large number of smart devices which are connected to the net and continuously transmit their data over time. It has been estimated that the number of internet connected devices has overtaken the number of humans on the planet, since 2008. The collection and processing of such data leads to unprecedented challenges in mining and processing such data. Such data needs to be processed in real-time and the processing may be highly distributed in nature. Even in cases, where the data is stored offline, the size of the data is often so large and distributed, that it requires the use of big data analytical tools for processing. In addition, such data is often sensitive, and brings a number of privacy challenges associated with it. This chapter will discuss a data analytics perspective about mining and managing data associated with this phenomenon, which is now known as the internet of things.", "title": "" }, { "docid": "2683a2b2a86b382a8e4ad6208d4cc37e", "text": "Violence detection is a hot topic for surveillance systems. However, it has not been studied as much as for action recognition. Existing vision-based methods mainly concentrate on violence detection and make little effort to determine the location of violence. In this paper, we propose a fast and robust framework for detecting and localizing violence in surveillance scenes. 
For this purpose, a Gaussian Model of Optical Flow (GMOF) is proposed to extract candidate violence regions, which are adaptively modeled as a deviation from the normal behavior of the crowd observed in the scene. Violence detection is then performed on each video volume constructed by densely sampling the candidate violence regions. To distinguish violent events from nonviolent events, we also propose a novel descriptor, named Orientation Histogram of Optical Flow (OHOF), which is fed into a linear SVM for classification. Experimental results on several benchmark datasets have demonstrated the superiority of our proposed method over the state of the art in terms of both detection accuracy and processing speed, even in crowded scenes.", "title": "" }, { "docid": "a3ac978e59bdedc18c45d460dd8fc154", "text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance the power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing block- and transaction-level data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.", "title": "" }, { "docid": "457b7543de1ffb7c04465f42cc313435", "text": "The purpose of this review is to document the directions and recent progress in our understanding of the motivational dynamics of school achievement. 
Based on the accumulating research, it is concluded that the quality of student learning as well as the will to continue learning depends closely on an interaction between the kinds of social and academic goals students bring to the classroom, the motivating properties of these goals and prevailing classroom reward structures. Implications for school reform that follow uniquely from a motivational and goal-theory perspective are also explored.", "title": "" }, { "docid": "5d3a0b1dfdbffbd4465ad7a9bb2f6878", "text": "The Cancer Genome Atlas (TCGA) is a publicly funded project that aims to catalogue and discover major cancer-causing genomic alterations to create a comprehensive \"atlas\" of cancer genomic profiles. So far, TCGA researchers have analysed large cohorts of over 30 human tumours through large-scale genome sequencing and integrated multi-dimensional analyses. Studies of individual cancer types, as well as comprehensive pan-cancer analyses, have extended current knowledge of tumorigenesis. A major goal of the project was to provide publicly available datasets to help improve diagnostic methods, treatment standards, and finally to prevent cancer. This review discusses the current status of TCGA Research Network structure, purpose, and achievements.", "title": "" }
scidocsrr
0e078e18998edeacaff6a61369a98571
Cyberbullying or Cyber Aggression ? : A Review of Existing Definitions of Cyber-Based Peer-to-Peer Aggression
[ { "docid": "64bdb5647b7b05c96de8c0d8f6f00eed", "text": "Cyberbullying is a reality of the digital age. To address this phenomenon, it becomes imperative to understand exactly what cyberbullying is. Thus, establishing a workable and theoretically sound definition is essential. This article contributes to the existing literature in relation to the definition of cyberbullying. The specific elements of repetition, power imbalance, intention, and aggression, regarded as essential criteria of traditional face-to-face bullying, are considered in the cyber context. It is posited that the core bullying elements retain their importance and applicability in relation to cyberbullying. The element of repetition is in need of redefining, given the public nature of material in the online environment. In this article, a clear distinction between direct and indirect cyberbullying is made and a model definition of cyberbullying is offered. Overall, the analysis provided lends insight into how the essential bullying elements have evolved and should apply in our parallel cyber universe.", "title": "" }, { "docid": "117f529b96afc67e1a9ba3058f83049f", "text": "Data from 53 focus groups, which involved students from 10 to 18 years old, show that youngsters often interpret \"cyberbullying\" as \"Internet bullying\" and associate the phenomenon with a wide range of practices. In order to be considered \"true\" cyberbullying, these practices must meet several criteria. 
They should be intended to hurt (by the perpetrator) and perceived as hurtful (by the victim); be part of a repetitive pattern of negative offline or online actions; and be performed in a relationship characterized by a power imbalance (based on \"real-life\" power criteria, such as physical strength or age, and/or on ICT-related criteria such as technological know-how and anonymity).", "title": "" }, { "docid": "056944e9e568d69d5caa707d03353f62", "text": "Cyberbullying has emerged as a new form of antisocial behaviour in the context of online communication over the last decade. The present study investigates potential longitudinal risk factors for cyberbullying. A total of 835 Swiss seventh graders participated in a short-term longitudinal study (two assessments 6 months apart). Students reported on the frequency of cyberbullying, traditional bullying, rule-breaking behaviour, cybervictimisation, traditional victimisation, and frequency of online communication (interpersonal characteristics). In addition, we assessed moral disengagement, empathic concern, and global self-esteem (intrapersonal characteristics). Results showed that traditional bullying, rule-breaking behaviour, and frequency of online communication are longitudinal risk factors for involvement in cyberbullying as a bully. Thus, cyberbullying is strongly linked to real-world antisocial behaviours. Frequent online communication may be seen as an exposure factor that increases the likelihood of engaging in cyberbullying. In contrast, experiences of victimisation and intrapersonal characteristics were not found to increase the longitudinal risk for cyberbullying over and above antisocial behaviour and frequency of online communication. Implications of the findings for the prevention of cyberbullying are discussed. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "fd8f5dc4264464cd8f978872d58aaf19", "text": "OBJECTIVES\nTo determine the capacity of black soldier fly larvae (BSFL) (Hermetia illucens) to convert fresh human faeces into larval biomass under different feeding regimes, and to determine how effective BSFL are as a means of human faecal waste management.\n\n\nMETHODS\nBlack soldier fly larvae were fed fresh human faeces. The frequency of feeding, number of larvae and feeding ratio were altered to determine their effects on larval growth, prepupal weight, waste reduction, bioconversion and feed conversion rate (FCR).\n\n\nRESULTS\nThe larvae that were fed a single lump amount of faeces developed into significantly larger larvae and prepupae than those fed incrementally every 2 days; however, the development into pre-pupae took longer. The highest waste reduction was found in the group containing the most larvae, with no difference between feeding regimes. At an estimated 90% pupation rate, the highest bioconversion (16-22%) and lowest, most efficient FCR (2.0-3.3) occurred in groups that contained 10 and 100 larvae, when fed both the lump amount and incremental regime.\n\n\nCONCLUSION\nThe prepupal weight, bioconversion and FCR results surpass those from previous studies into BSFL management of swine, chicken manure and municipal organic waste. This suggests that the use of BSFL could provide a solution to the health problems associated with poor sanitation and inadequate human waste management in developing countries.", "title": "" }, { "docid": "59e3e0099e215000b34e32d90b0bd650", "text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. 
We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.", "title": "" }, { "docid": "35de54ee9d3d4c117cf4c1d8fc4f4e87", "text": "For the purpose of managing process models to make them more practical and effective in enterprises, the construction of a BPMN-based Business Process Model Base is proposed. Considering that Business Process Modeling Notation (BPMN) is used as a standard for process modeling, a process model transformation based on BPMN is given, and business blueprint modularization management methodology is used for process management. Therefore, the BPMN-based Business Process Model Base provides a solution for business process modeling standardization, management and execution so as to enhance business process reuse.", "title": "" }, { "docid": "0add9f22db24859da50e1a64d14017b9", "text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.", "title": "" }, { "docid": "bd07c789a76efd51cc78f9828d045329", "text": "BACKGROUND\nProphylaxis for venous thromboembolism is recommended for at least 10 days after total knee arthroplasty; oral regimens could enable shorter hospital stays. 
We aimed to test the efficacy and safety of oral rivaroxaban for the prevention of venous thromboembolism after total knee arthroplasty.\n\n\nMETHODS\nIn a randomised, double-blind, phase III study, 3148 patients undergoing knee arthroplasty received either oral rivaroxaban 10 mg once daily, beginning 6-8 h after surgery, or subcutaneous enoxaparin 30 mg every 12 h, starting 12-24 h after surgery. Patients had mandatory bilateral venography between days 11 and 15. The primary efficacy outcome was the composite of any deep-vein thrombosis, non-fatal pulmonary embolism, or death from any cause up to day 17 after surgery. Efficacy was assessed as non-inferiority of rivaroxaban compared with enoxaparin in the per-protocol population (absolute non-inferiority limit -4%); if non-inferiority was shown, we assessed whether rivaroxaban had superior efficacy in the modified intention-to-treat population. The primary safety outcome was major bleeding. This trial is registered with ClinicalTrials.gov, number NCT00362232.\n\n\nFINDINGS\nThe primary efficacy outcome occurred in 67 (6.9%) of 965 patients given rivaroxaban and in 97 (10.1%) of 959 given enoxaparin (absolute risk reduction 3.19%, 95% CI 0.71-5.67; p=0.0118). Ten (0.7%) of 1526 patients given rivaroxaban and four (0.3%) of 1508 given enoxaparin had major bleeding (p=0.1096).\n\n\nINTERPRETATION\nOral rivaroxaban 10 mg once daily for 10-14 days was significantly superior to subcutaneous enoxaparin 30 mg given every 12 h for the prevention of venous thromboembolism after total knee arthroplasty.\n\n\nFUNDING\nBayer Schering Pharma AG, Johnson & Johnson Pharmaceutical Research & Development.", "title": "" }, { "docid": "db8f1de1961f4730e6fc40881f4d0641", "text": "Non-thrombotic pulmonary embolism has recently been reported as a remote complication of filler injections to correct hollowing in the temporal region. The middle temporal vein (MTV) has been identified as being highly susceptible to accidental injection. 
The anatomy and tributaries of the MTV were investigated in six soft embalmed cadavers. The MTV was cannulated and injected in both anterograde and retrograde directions in ten additional cadavers using saline and black filler, respectively. The course and tributaries of the MTV were described. Regarding the infusion experiment, manual injection of saline was easily infused into the MTV toward the internal jugular vein, resulting in continuous flow of saline drainage. This revealed a direct channel from the MTV to the internal jugular vein. Assessment of a preventive maneuver during filler injections was effectively performed by pressing at the preauricular venous confluent point against the zygomatic process. Sudden retardation of saline flow from the drainage tube situated in the internal jugular vein was observed when the preauricular confluent point was compressed. Injection of black gel filler into the MTV and the tributaries through the cannulated tube directed toward the eye proved difficult. The mechanism of venous filler emboli in a clinical setting occurs when the MTV is accidentally cannulated. The filler emboli follow the anterograde venous blood stream to the pulmonary artery causing non-thrombotic pulmonary embolism. Pressing of the pretragal confluent point is strongly recommended during temporal injection to help prevent filler complications, but does not totally eliminate complication occurrence.", "title": "" }, { "docid": "b3e183d0e260ff14d82d8c5f65aa808a", "text": "Ulnar nerve entrapment across the elbow (UAE), a common entrapment, requires neurophysiological evaluation for a diagnosis, but a standardized neurophysiological classification is not available. The aim of our study was to evaluate the validity of a neurophysiological classification of UAE, developed by us. To this end, we examined whether sensorimotor deficits, as observed by the physician and as referred by the patients, increased with the neurophysiological severity according to the classification. We performed a multiperspective assessment of 63 consecutive arms from 52 patients with a clinical diagnosis of UAE. Neurophysiological, clinical and patient-oriented validated measurements were used. The neurophysiological classification is based on the presence or absence of evoked responses and on the normality or abnormality of conduction findings. A strict relationship was observed between the degree of neurophysiological severity and the clinical findings (sensorimotor deficits). Moreover, a significant positive correlation between hand functional deficit and neurophysiological classification was observed. Conversely, a clear correlation between neurophysiological pattern and symptoms was not found. The neurophysiological classification is easy to use and reliable, but further multicentric studies should be performed.", "title": "" }, { "docid": "07c8719c4b8be9e02d14cd24c6e4e05c", "text": "Sentiment and emotional analysis on online collaborative software development forums can be very useful to gain important insights into the behaviors and personalities of the developers. 
Such information can later be used to increase the productivity of developers by making recommendations on how best to behave in order to get a task accomplished. However, due to the highly technical nature of the data present in online collaborative software development forums, mining sentiments and emotions becomes a very challenging task. In this work we present a new approach for mining sentiments and emotions from software development datasets using Interaction Process Analysis (IPA) labels and machine learning. We also apply distance metric learning as a preprocessing step before training a feed-forward neural network and report the precision, recall, F1 and accuracy.", "title": "" }, { "docid": "bae3d6ffee5380ea6352b8b384667d76", "text": "A flexible transparent modified dipole antenna printed on PET film is presented in this paper. The proposed antenna was designed to operate at 2.4 GHz for ISM applications. The impedance characteristic and the radiation characteristic were simulated and measured. The proposed antenna has good performance. It can be easily mounted on conformal shapes because it is fabricated on flexible PET film.", "title": "" }, { "docid": "8519ab2692f07cc4d7fa8443591c4729", "text": "We discuss methodology for multidimensional scaling (MDS) and its implementation in two software systems, GGvis and XGvis. MDS is a visualization technique for proximity data, that is, data in the form of N × N dissimilarity matrices. MDS constructs maps (“configurations,” “embeddings”) in IRk by interpreting the dissimilarities as distances. Two frequent sources of dissimilarities are high-dimensional data and graphs. When the dissimilarities are distances between high-dimensional objects, MDS acts as a (often nonlinear) dimension-reduction technique. When the dissimilarities are shortest-path distances in a graph, MDS acts as a graph layout technique. 
MDS has found recent attention in machine learning motivated by image databases (“Isomap”). MDS is also of interest in view of the popularity of “kernelizing” approaches inspired by Support Vector Machines (SVMs; “kernel PCA”). This article discusses the following general topics: (1) the stability and multiplicity of MDS solutions; (2) the analysis of structure within and between subsets of objects with missing value schemes in dissimilarity matrices; (3) gradient descent for optimizing general MDS loss functions (“Strain” and “Stress”); (4) a unification of classical (Strain-based) and distance (Stress-based) MDS. Particular topics include the following: (1) blending of automatic optimization with interactive displacement of configuration points to assist in the search for global optima; (2) forming groups of objects with interactive brushing to create patterned missing values in MDS loss functions; (3) optimizing MDS loss functions for large numbers of objects relative to a small set of anchor points (“external unfolding”); and (4) a nonmetric version of classical MDS.", "title": "" }, { "docid": "8eb161e363d55631148ed3478496bbd5", "text": "This paper proposes a new power-factor-correction (PFC) topology, and explains its operation principle, its control mechanism, related application problems followed by experimental results. In this proposed topology, critical-conduction-mode (CRM) interleaved technique is applied to a bridgeless PFC in order to achieve high efficiency by combining benefits of each topology. This application is targeted toward low to middle power applications that normally employs continuous-conductionmode boost converter. key words: PFC, Interleaved, critical-conduction-mode, totem-pole", "title": "" }, { "docid": "de1db4e54fb686f2b597936aa551cd14", "text": "Trustworthy software requires strong privacy and security guarantees from a secure trust base in hardware. 
While chipmakers provide hardware support for basic security and privacy primitives such as enclaves and memory encryption, these primitives do not address hiding of the memory access pattern, information about which may enable attacks on the system or reveal characteristics of sensitive user data. State-of-the-art approaches to protecting the access pattern are largely based on Oblivious RAM (ORAM). Unfortunately, current ORAM implementations suffer from very significant practicality and overhead concerns, including roughly an order of magnitude slowdown, more than 100% memory capacity overheads, and the potential for system deadlock.\n Memory technology trends are moving towards 3D and 2.5D integration, enabling significant logic capabilities and sophisticated memory interfaces. Leveraging the trends, we propose a new approach to access pattern obfuscation, called ObfusMem. ObfusMem adds the memory to the trusted computing base and incorporates cryptographic engines within the memory. ObfusMem encrypts commands and addresses on the memory bus, hence the access pattern is cryptographically obfuscated from external observers. Our evaluation shows that ObfusMem incurs an overhead of 10.9% on average, which is about an order of magnitude faster than ORAM implementations. Furthermore, ObfusMem does not incur capacity overheads and does not amplify writes. We analyze and compare the security protections provided by ObfusMem and ORAM, and highlight their differences.", "title": "" }, { "docid": "e4c23ebf305f9f1a3e3d016b6f22e683", "text": "Accurate detection of the human metaphase chromosome centromere is a critical element of cytogenetic diagnostic techniques, including chromosome enumeration, karyotyping and radiation biodosimetry. Existing centromere detection methods tend to perform poorly in the presence of irregular boundaries, shape variations and premature sister chromatid separation. 
We present a centromere detection algorithm that uses a novel contour partitioning technique to generate centromere candidates followed by a machine learning approach to select the best candidate that enhances the detection accuracy. The contour partitioning technique evaluates various combinations of salient points along the chromosome boundary using a novel feature set and is able to identify telomere regions as well as detect and correct for sister chromatid separation. This partitioning is used to generate a set of centromere candidates which are then evaluated based on a second set of proposed features. The proposed algorithm outperforms previously published algorithms and is shown to do so with a larger set of chromosome images. A highlight of the proposed algorithm is the ability to rank this set of centromere candidates and create a centromere confidence metric which may be used in post-detection analysis. When tested with a larger metaphase chromosome database consisting of 1400 chromosomes collected from 40 metaphase cell images, the proposed algorithm was able to accurately localize 1220 centromere locations yielding a detection accuracy of 87%.", "title": "" }, { "docid": "d86ed46cf03298129055a7a734c0ef3c", "text": "Photosynthetic CO2 uptake rate and early growth parameters of radish Raphanus sativus L. seedlings exposed to an extremely low frequency magnetic field (ELF MF) were investigated. Radish seedlings were exposed to a 60 Hz, 50 microT(rms) (root mean square) sinusoidal magnetic field (MF) and a parallel 48 microT static MF for 6 or 15 d immediately after germination. Control seedlings were exposed to the ambient MF but not the ELF MF. The CO2 uptake rate of ELF MF exposed seedlings on day 5 and later was lower than that of the control seedlings. 
The dry weight and the cotyledon area of ELF MF exposed seedlings on day 6 and the fresh weight, the dry weight and the leaf area of ELF MF exposed seedlings on day 15 were significantly lower than those of the control seedlings, respectively. In another experiment, radish seedlings were grown without ELF MF exposure for 14 d immediately after germination, and then exposed to the ELF MF for about 2 h, and the photosynthetic CO2 uptake rate was measured during the short-term ELF MF exposure. The CO2 uptake rate of the same seedlings was subsequently measured in the ambient MF (control) without the ELF MF. There was no difference in the CO2 uptake rate of seedlings exposed to the ELF MF or the ambient MF. These results indicate that continuous exposure to a 60 Hz, 50 microT(rms) sinusoidal MF with a parallel 48 microT static MF affects the early growth of radish seedlings, but the effect is not so severe that modification of photosynthetic CO2 uptake can be observed during short-term MF exposure.", "title": "" }, { "docid": "f398eee40f39acd2c2955287ccbb4924", "text": "One of the ultimate goals of natural language processing (NLP) systems is understanding the meaning of what is being transmitted, irrespective of the medium (e.g., written versus spoken) or the form (e.g., static documents versus dynamic dialogues). Although much work has been done in traditional language domains such as speech and static written text, little has yet been done in the newer communication domains enabled by the Internet, e.g., online chat and instant messaging. This is in part due to the fact that there are no annotated chat corpora available to the broader research community. The purpose of this research is to build a chat corpus, tagged with lexical (token part-of-speech labels), syntactic (post parse tree), and discourse (post classification) information. 
Such a corpus can then be used to develop more complex, statistical-based NLP applications that perform tasks such as author profiling, entity identification, and social network analysis.", "title": "" }, { "docid": "edbf9ed3377e31d53b7f633a5bfe3ebe", "text": "INTRODUCTION\nAnchorage control in patients with severe skeletal Class II malocclusion is a difficult problem in orthodontic treatment. In adults, treatment often requires premolar extractions and maximum anchorage. Recently, incisor retraction with miniscrew anchorage has become a new strategy for treating skeletal Class II patients.\n\n\nMETHODS\nIn this study, we compared treatment outcomes of patients with severe skeletal Class II malocclusion treated using miniscrew anchorage (n = 11) or traditional orthodontic mechanics of headgear and transpalatal arch (n = 11). Pretreatment and posttreatment lateral cephalograms were analyzed.\n\n\nRESULTS\nBoth treatment methods, miniscrew anchorage or headgear, achieved acceptable results as indicated by the reduction of overjet and the improvement of facial profile. However, incisor retraction with miniscrew anchorage did not require patient cooperation to reinforce the anchorage and provided more significant improvement of the facial profile than traditional anchorage mechanics (headgear combined with transpalatal arch).\n\n\nCONCLUSIONS\nOrthodontic treatment with miniscrew anchorage is simpler and more useful than that with traditional anchorage mechanics for patients with Class II malocclusion.", "title": "" }, { "docid": "c4e11f7bbb252b18910a64c0145edec2", "text": "Cluster analysis represents one of the most versatile methods in statistical science. It is employed in empirical sciences for the summarization of datasets into groups of similar objects, with the purpose of facilitating the interpretation and further analysis of the data. 
Cluster analysis is of particular importance in the exploratory investigation of data of high complexity, such as that derived from molecular biology or image databases. Consequently, recent work in the field of cluster analysis, including the work presented in this thesis, has focused on designing algorithms that can provide meaningful solutions for data with high cardinality and/or dimensionality, under the natural restriction of limited resources. In the first part of the thesis, a novel algorithm for the clustering of large, high-dimensional datasets is presented. The developed method is based on the principles of projection pursuit and grid partitioning, and focuses on reducing computational requirements for large datasets without loss of performance. To achieve that, the algorithm relies on procedures such as sampling of objects, feature selection, and quick density estimation using histograms. The algorithm searches for low-density points in potentially favorable one-dimensional projections, and partitions the data by a hyperplane passing through the best split point found. Tests on synthetic and reference data indicated that the proposed method can quickly and efficiently recover clusters that are distinguishable from the remaining objects in at least one direction; linearly non-separable clusters were usually subdivided. In addition, the clustering solution was proved to be robust in the presence of noise at moderate levels, and when the clusters are partially overlapping. In the second part of the thesis, a novel method for generating synthetic datasets with variable structure and clustering difficulty is presented. The developed algorithm can construct clusters with different sizes, shapes, and orientations, consisting of objects sampled from different probability distributions. In addition, some of the clusters can have multimodal distributions, curvilinear shapes, or they can be defined only in restricted subsets of dimensions. 
The clusters are distributed within the data space using a greedy geometrical procedure, with the overall degree of cluster overlap adjusted by scaling the clusters. Evaluation tests indicated that the proposed approach is highly effective in prescribing the cluster overlap. Furthermore, it can be extended to allow for the production of datasets containing non-overlapping clusters with defined degrees of separation. In the third part of the thesis, a novel system for the semi-supervised annotation of images is described and evaluated. The system is based on a visual vocabulary of prototype visual features, which is constructed through the clustering of visual features extracted from training images with accurate textual annotations. Consequently, each training image is associated with the visual words representing its detected features. In addition, each such image is associated with the concepts extracted from the linked textual data. These two sets of associations are combined into a direct linkage scheme between textual concepts and visual words, thus constructing an automatic image classifier that can annotate new images with text-based concepts using only their visual features. As an initial application, the developed method was successfully employed in a person classification task.", "title": "" }, { "docid": "85e3992ff97ae284218cf47dcb57abec", "text": "Software has been part of modern society for more than 50 years. There are several software development methodologies in use today. Some companies have their own customized methodology for developing their software, but most speak of two kinds of methodologies: heavyweight and lightweight. Heavyweight methodologies, also considered the traditional way to develop software, emphasize comprehensive planning, detailed documentation, and expansive design. 
The lightweight methodologies, also known as agile modeling, have gained significant attention from the software engineering community in the last few years. Unlike traditional methods, agile methodologies employ short iterative cycles, and rely on tacit knowledge within a team as opposed to documentation. In this dissertation, I have described the characteristics of some traditional and agile methodologies that are widely used in software development. I have also discussed the strengths and weaknesses of the two opposing methodologies and presented the challenges associated with implementing agile processes in the software industry. Anecdotal evidence is rising regarding the effectiveness of agile methodologies in certain environments; but there has not been much collection and analysis of empirical evidence for agile projects. However, to support my dissertation I conducted a questionnaire, soliciting feedback from software industry practitioners to evaluate which methodology has a better success rate for different sizes of software development. According to our findings, agile methodologies can provide good benefits for small- and medium-scale projects, but for large-scale projects traditional methods seem dominant.", "title": "" }, { "docid": "ad4596e24f157653a36201767d4b4f3b", "text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets of different sizes, genres and annotation schemes. 
We obtain state-of-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.", "title": "" }, { "docid": "db190bb0cf83071b6e19c43201f92610", "text": "In this paper, a MATLAB-based simulation of a grid-connected PV system is presented. The main components of this simulation, a PV solar panel, a boost converter, a maximum power point tracking (MPPT) system and a grid-connected PV inverter with a closed-loop control system, are designed and simulated. Simulation studies are carried out at different solar radiation levels.", "title": "" } ]
scidocsrr
152bb0f38f2ed471967956032ddbaf5e
Visual Translation Embedding Network for Visual Relation Detection
[ { "docid": "a81b08428081cd15e7c705d5a6e79a6f", "text": "Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.", "title": "" } ]
[ { "docid": "16f424e9b279d8368e0081f9d83581ab", "text": "Object recognition is one of the important tasks in computer vision which has found enormous applications. Depth modality is proven to provide supplementary information to the common RGB modality for object recognition. In this paper, we propose methods to improve the recognition performance of an existing deep learning based RGB-D object recognition model, namely the FusionNet proposed by Eitel et al. First, we show that encoding the depth values as colorized surface normals is beneficial, when the model is initialized with weights learned from training on ImageNet data. Additionally, we show that the RGB stream of the FusionNet model can benefit from using deeper network architectures, namely the 16-layered VGGNet, in exchange for the 8-layered CaffeNet. In combination, these changes improves the recognition performance with 2.2% in comparison to the original FusionNet, when evaluating on the Washington RGB-D Object Dataset.", "title": "" }, { "docid": "be4defd26cf7c7a29a85da2e15132be9", "text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. 
In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.", "title": "" }, { "docid": "f6342101ff8315bcaad4e4f965e6ba8a", "text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induce additional features in the Doppler frequency spectra. These features are called micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].", "title": "" }, { "docid": "2ea12a279b2a059399dcc62db2957ce5", "text": "Alkaline pretreatment with NaOH under mild operating conditions was used to improve ethanol and biogas production from softwood spruce and hardwood birch. The pretreatments were carried out at different temperatures between minus 15 and 100oC with 7.0% w/w NaOH solution for 2 h. The pretreated materials were then enzymatically hydrolyzed and subsequently fermented to ethanol or anaerobically digested to biogas. In general, the pretreatment was more successful for both ethanol and biogas production from the hardwood birch than the softwood spruce. 
The pretreatment resulted in significant reduction of hemicellulose and the crystallinity of cellulose, which might be responsible for improved enzymatic hydrolyses of birch from 6.9% to 82.3% and spruce from 14.1% to 35.7%. These results were obtained with pretreatment at 100°C for birch and 5°C for spruce. Subsequently, the best ethanol yield obtained was 0.08 g/g of the spruce while pretreated at 100°C, and 0.17 g/g of the birch treated at 100°C. On the other hand, digestion of untreated birch and spruce resulted in methane yields of 250 and 30 l/kg VS of the wood species, respectively. The pretreatment of the wood species at the best conditions for enzymatic hydrolysis resulted in 83% and 74% improvement in methane production from birch and spruce.", "title": "" }, { "docid": "f73216f257d978edbf744d51164e2ad3", "text": "With the development of low power electronics and energy harvesting technology, selfpowered systems have become a research hotspot over the last decade. The main advantage of self-powered systems is that they require minimum maintenance which makes them to be deployed in large scale or previously inaccessible locations. Therefore, the target of energy harvesting is to power autonomous ‘fit and forget’ electronic systems over their lifetime. Some possible alternative energy sources include photonic energy (Norman, 2007), thermal energy (Huesgen et al., 2008) and mechanical energy (Beeby et al., 2006). Among these sources, photonic energy has already been widely used in power supplies. Solar cells provide excellent power density. However, energy harvesting using light sources restricts the working environment of electronic systems. Such systems cannot work normally in low light or dirty conditions. Thermal energy can be converted to electrical energy by the Seebeck effect while working environment for thermo-powered systems is also limited. 
Mechanical energy can be found in instances where thermal or photonic energy is not suitable, which makes extracting energy from mechanical energy an attractive approach for powering electronic systems. The source of mechanical energy can be a vibrating structure, a moving human body or air/water flow induced vibration. The frequency of the mechanical excitation depends on the source: less than 10Hz for human movements and typically over 30Hz for machinery vibrations (Roundy et al., 2003). In this chapter, energy harvesting from various vibration sources will be reviewed. In section 2, energy harvesting from machinery vibration will be introduced. A general model of vibration energy harvester is presented first followed by introduction of three main transduction mechanisms, i.e. electromagnetic, piezoelectric and electrostatic transducers. In addition, vibration energy harvesters with frequency tunability and wide bandwidth will be discussed. In section 3, energy harvesting from human movement will be introduced. In section 4, energy harvesting from flow induced vibration (FIV) will be discussed. Three types of such generators will be introduced, i.e. energy harvesting from vortex-induced vibration (VIV), fluttering energy harvesters and Helmholtz resonator. Conclusions will be given in section 5.", "title": "" }, { "docid": "88acb55335bc4530d8dfe5f44738d39f", "text": "Driving is an attention-demanding task, especially with children in the back seat. While most recommendations prefer to reduce children's screen time in common entertainment systems, e.g. DVD players and tablets, parents often rely on these systems to entertain the children during car trips. These systems often lack key components that are important for modern parents, namely, sociability and educational content. In this contribution we introduce PANDA, a parental affective natural driving assistant. 
PANDA is a virtual in-car entertainment agent that can migrate around the car to interact with the parent-driver or with children in the back seat. PANDA supports the parent-driver via speech interface, helps to mediate her interaction with children in the back seat, and works to reduce distractions for the driver while also engaging, entertaining and educating children. We present the design of PANDA system and preliminary tests of the prototype system in a car setting.", "title": "" }, { "docid": "79c0490d7c19c855812beb8e71e52c54", "text": "Software engineering project management (SEPM) has been the focus of much recent attention because of the enormous penalties incurred during software development and maintenance resulting from poor management. To date there has been no comprehensive study performed to determine the most significant problems of SEPM, their relative importance, or the research directions necessary to solve them. We conducted a major survey of individuals from all areas of the computer field to determine the general consensus on SEPM problems. Twenty hypothesized problems were submitted to several hundred individuals for their opinions. The 294 respondents validated most of these propositions. None of the propositions was rejected by the respondents as unimportant. A number of research directions were indicated by the respondents which, if followed, the respondents believed would lead to solutions for these problems.", "title": "" }, { "docid": "2d43992a8eb6e97be676c04fc9ebd8dd", "text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. 
By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.", "title": "" }, { "docid": "fe94c5e7130d28b5cec34e001582e4ce", "text": "This study presents a model of harsh parenting that has an indirect effect, as well as a direct effect, on child aggression in the school environment through the mediating process of child emotion regulation. Tested on a sample of 325 Chinese children and their parents, the model showed adequate goodness of fit. Also investigated were interaction effects between parents' and children's gender. Mothers' harsh parenting affected child emotion regulation more strongly than fathers', whereas harsh parenting emanating from fathers had a stronger effect on child aggression. Fathers' harsh parenting also affected sons more than daughters, whereas there was no gender differential effect with mothers' harsh parenting. 
These results are discussed with an emphasis on negative emotionality as a potentially common cause of family perturbations, including parenting and child adjustment problems.", "title": "" }, { "docid": "ea1352cf1fd488ccd89bf8ec727d6b99", "text": "Diverse neuropeptides participate in cell–cell communication to coordinate neuronal and endocrine regulation of physiological processes in health and disease. Neuropeptides are short peptides ranging in length from ~3 to 40 amino acid residues that are involved in biological functions of pain, stress, obesity, hypertension, mental disorders, cancer, and numerous health conditions. The unique neuropeptide sequences define their specific biological actions. Significantly, this review article discusses how the neuropeptide field is at the crest of expanding knowledge gained from mass-spectrometry-based neuropeptidomic studies, combined with proteomic analyses for understanding the biosynthesis of neuropeptidomes. The ongoing expansion in neuropeptide diversity lies in the unbiased and global mass-spectrometry-based approaches for identification and quantitation of peptides. Current mass spectrometry technology allows definition of neuropeptide amino acid sequence structures, profiling of multiple neuropeptides in normal and disease conditions, and quantitative peptide measures in biomarker applications to monitor therapeutic drug efficacies. Complementary proteomic studies of neuropeptide secretory vesicles provide valuable insight into the protein processes utilized for neuropeptide production, storage, and secretion. Furthermore, ongoing research in developing new computational tools will facilitate advancements in mass-spectrometry-based identification of small peptides. 
Knowledge of the entire repertoire of neuropeptides that regulate physiological systems will provide novel insight into regulatory mechanisms in health, disease, and therapeutics.", "title": "" }, { "docid": "9b5877847bedecd73a8c2f0d6f832641", "text": "Traditional, more biochemically motivated approaches to chemical design and drug discovery are notoriously complex and costly processes. The space of all synthesizable molecules is far too large to exhaustively search any meaningful subset for interesting novel drug and molecule proposals, and the lack of any particularly informative and manageable structure to this search space makes the very task of defining interesting subsets a difficult problem in itself. Recent years have seen the proposal and rapid development of alternative, machine learning-based methods for vastly simplifying the search problem specified in chemical design and drug discovery. In this work, I build upon this existing literature exploring the possibility of automatic chemical design and propose a novel generative model for producing a diverse set of valid new molecules. The proposed molecular graph variational autoencoder model achieves comparable performance across standard metrics to the state-of-the-art in this problem area and is capable of regularly generating valid molecule proposals similar but distinctly different from known sets of interesting molecules. 
While an interesting result in terms of addressing one of the core issues with machine learning-based approaches to automatic chemical design, further research in this direction should aim to optimize for more biochemically motivated objectives and be more informed by the ultimate utility of such models to the biochemical field.", "title": "" }, { "docid": "075e263303b73ee5d1ed6cff026aee63", "text": "Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. 
We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.", "title": "" }, { "docid": "6f0ebd6314cd5c012f791d0e5c448045", "text": "This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.", "title": "" }, { "docid": "e89124e33d7d208fcdd30c5cccc409d6", "text": "In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. 
By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and taking the model compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.", "title": "" }, { "docid": "78c40bdaaa28daa997d4727d49976536", "text": "Multiple-input multiple-output (MIMO) systems are well suited for millimeter-wave (mmWave) wireless communications where large antenna arrays can be integrated in small form factors due to tiny wavelengths, thereby providing high array gains while supporting spatial multiplexing, beamforming, or antenna diversity. It has been shown that mmWave channels exhibit sparsity due to the limited number of dominant propagation paths, thus compressed sensing techniques can be leveraged to conduct channel estimation at mmWave frequencies. This paper presents a novel approach of constructing beamforming dictionary matrices for sparse channel estimation using the continuous basis pursuit (CBP) concept, and proposes two novel low-complexity algorithms to exploit channel sparsity for adaptively estimating multipath channel parameters in mmWave channels. 
We verify the performance of the proposed CBP-based beamforming dictionary and the two algorithms using a simulator built upon a three-dimensional mmWave statistical spatial channel model, NYUSIM, that is based on real-world propagation measurements. Simulation results show that the CBP-based dictionary offers substantially higher estimation accuracy and greater spectral efficiency than the grid-based counterpart introduced by previous researchers, and the algorithms proposed here render better performance but require less computational effort compared with existing algorithms.", "title": "" }, { "docid": "63e45222ea9627ce22e9e90fc1ca4ea1", "text": "A soft switching three-transistor push-pull(TTPP)converter is proposed in this paper. The 3rd transistor is inserted in the primary side of a traditional push-pull converter. Two primitive transistors can achieve zero-voltage-switching (ZVS) easily under a wide load range, the 3rd transistor can also realize zero-voltage-switching assisted by leakage inductance. The rated voltage of the 3rd transistor is half of that of the main transistors. The operation theory is explained in detail. The soft-switching realization conditions are derived. An 800 W with 83.3 kHz switching frequency prototype has been built. The experimental result is provided to verify the analysis.", "title": "" }, { "docid": "9ddc451ee5509f69ffab3f3485ba5870", "text": "GOAL\nThe aims are to establish the prevalence of newfound, unidentified cases of depressive disorder by screening with the Becks Depression scale; To establish a comparative relationship with self-identified cases of depression in the patients in the family medicine; To assess the significance of the BDI in screening practice of family medicine.\n\n\nPATIENTS AND METHODS\nA prospective study was conducted anonymously by Beck's Depression scale (Beck Depression Questionnaire org.-BDI) and specially created short questionnaire. 
The study included 250 randomly selected patients (20-60 years), users of services in family medicine in \"Dom Zdravlja\" Zenica, and the final number of respondents included in the study was 126 (51 male, 75 female; response rate 50.4%). The exclusion factor was a previously diagnosed and treated mental disorder. Participation was voluntary and respondents acknowledged the validity of completing the questionnaire. The BDI consists of 21 items. Answers to questions about symptoms were ranked according to a Likert-type scale from 0-4 (from irrelevant to very much). Respondents expressed their personal perception of depression, i.e. whether or not they considered themselves depressed.\n\n\nRESULTS\nDepression was observed in 48% of patients, compared to 31% by self-estimation of depression in the analyzed questionnaires. The negative trend in the misrecognition of depression is -17% (48:31). Depression was significantly more frequent in unemployed compared to employed respondents (p = 0.001). The leading symptom in both sexes is the perception of lost hope (59% of cases).\n\n\nCONCLUSION\nScreening in family medicine care in Zenica showed a high percentage (17%) of newly detected patients with previously unrecognized depression. The BDI is a simple and effective screening tool for the detection and identification of persons with symptoms of depression.", "title": "" }, { "docid": "a17cf9c0d9be4f25b605b986b368445a", "text": "The amyloid-β peptide (Aβ) is a key protein in Alzheimer’s disease (AD) pathology. We previously reported in vitro evidence suggesting that Aβ is an antimicrobial peptide. We present in vivo data showing that Aβ expression protects against fungal and bacterial infections in mouse, nematode, and cell culture models of AD. We show that Aβ oligomerization, a behavior traditionally viewed as intrinsically pathological, may be necessary for the antimicrobial activities of the peptide. 
Collectively, our data are consistent with a model in which soluble Aβ oligomers first bind to microbial cell wall carbohydrates via a heparin-binding domain. Developing protofibrils inhibited pathogen adhesion to host cells. Propagating β-amyloid fibrils mediate agglutination and eventual entrapment of unattached microbes. Consistent with our model, Salmonella Typhimurium bacterial infection of the brains of transgenic 5XFAD mice resulted in rapid seeding and accelerated β-amyloid deposition, which closely colocalized with the invading bacteria. Our findings raise the intriguing possibility that β-amyloid may play a protective role in innate immunity and infectious or sterile inflammatory stimuli may drive amyloidosis. These data suggest a dual protective/damaging role for Aβ, as has been described for other antimicrobial peptides.", "title": "" }, { "docid": "6c1a3792b9f92a4a1abd2135996c5419", "text": "Artificial neural networks (ANNs) have been applied successfully in many areas because of their ability to learn, ease of implementation and fast real-time operation. In this research, two algorithms are proposed. The first is a cellular neural network (CNN) with noise level estimation, while the second is a modified cellular neural network with noise level estimation. The proposed CNN modification adds Rossler chaos to the CNN feed. A noise level estimation algorithm was used in the image noise removal approach in order to obtain good image denoising with high-quality visual and statistical measures. 
The results of the proposed system show that the combination of the chaos CNN with noise level estimation gives acceptable PSNR and RMSE with the best visual quality and small computational time.", "title": "" }, { "docid": "84647b51dbbe755534e1521d9d9cf843", "text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>", "title": "" } ]
scidocsrr
9c33bd10e001f3ae096a07a1b535252e
Multiscale Rotated Bounding Box-Based Deep Learning Method for Detecting Ship Targets in Remote Sensing Images
[ { "docid": "9c74b77e79217602bb21a36a5787ed59", "text": "Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. In this paper, we propose a ship detection approach to solving the aforementioned two issues using wavelet coefficients extracted from JPEG2000 compressed domain combined with deep neural network (DNN) and extreme learning machine (ELM). Compressed domain is adopted for fast ship candidate extraction, DNN is exploited for high-level feature representation and classification, and ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy.", "title": "" } ]
[ { "docid": "fec50e53536febc02b8fe832a97cf833", "text": "Translational control plays a critical role in the regulation of gene expression in eukaryotes and affects many essential cellular processes, including proliferation, apoptosis and differentiation. Under most circumstances, translational control occurs at the initiation step at which the ribosome is recruited to the mRNA. The eukaryotic translation initiation factor 4E (eIF4E), as part of the eIF4F complex, interacts first with the mRNA and facilitates the recruitment of the 40S ribosomal subunit. The activity of eIF4E is regulated at many levels, most profoundly by two major signalling pathways: PI3K (phosphoinositide 3-kinase)/Akt (also known and Protein Kinase B, PKB)/mTOR (mechanistic/mammalian target of rapamycin) and Ras (rat sarcoma)/MAPK (mitogen-activated protein kinase)/Mnk (MAPK-interacting kinases). mTOR directly phosphorylates the 4E-BPs (eIF4E-binding proteins), which are inhibitors of eIF4E, to relieve translational suppression, whereas Mnk phosphorylates eIF4E to stimulate translation. Hyperactivation of these pathways occurs in the majority of cancers, which results in increased eIF4E activity. Thus, translational control via eIF4E acts as a convergence point for hyperactive signalling pathways to promote tumorigenesis. Consequently, recent works have aimed to target these pathways and ultimately the translational machinery for cancer therapy.", "title": "" }, { "docid": "36a538b833de4415d12cd3aa5103cf9b", "text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. 
In this context, we developed a platform, called P-ETL (Parallel-ETL) for extracting (E), transforming (T) and loading (L) very large data in a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in a parallel way with the MapReduce paradigm. The conducted experiment shows mainly that increasing the number of tasks dealing with large data speeds up the ETL process.", "title": "" }, { "docid": "6eaa0d1b6a7e55eca070381954638292", "text": "Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.", "title": "" }, { "docid": "a6d4b6a0cd71a8e64c9a2429b95cd7da", "text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. 
In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can be ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative?
Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics who claim that computers aren’t creative: the computer is just following instructions. Secondly, and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. As the boundary blurs, we need to develop ways of recognizing creativity that make no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity by describing the characteristics of a creative process and separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focuses on the cognitive behavior of a creative person or the properties of a computational system, and describing ways of recognizing a creative product focuses on the characteristics of the result of a creative process.
When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, and transformation: each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that haven’t occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden’s processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target’s space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process.
First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product. As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product. Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value. Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. 
Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. 
Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. Value is a measure of how the creative product co", "title": "" }, { "docid": "179c5bc5044d85c2597d41b1bd5658b3", "text": "Embedding models typically associate each word with a single real-valued vector, representing its different properties. Evaluation methods, therefore, need to analyze the accuracy and completeness of these properties in embeddings. This requires fine-grained analysis of embedding subspaces. Multi-label classification is an appropriate way to do so. We propose a new evaluation method for word embeddings based on multi-label classification given a word embedding. The task we use is finegrained name typing: given a large corpus, find all types that a name can refer to based on the name embedding. Given the scale of entities in knowledge bases, we can build datasets for this task that are complementary to the current embedding evaluation datasets in: they are very large, contain fine-grained classes, and allow the direct evaluation of embeddings without confounding factors like sentence context.", "title": "" }, { "docid": "3611d022aee93b9cbcc961bb7cbdd3ff", "text": "Due to the popularity of Deep Neural Network (DNN) models, we have witnessed extreme-scale DNN models with the continued increase of the scale in terms of depth and width. However, the extremely high memory requirements for them make it difficult to run the training processes on single many-core architectures such as a Graphic Processing Unit (GPU), which compels researchers to use model parallelism over multiple GPUs to make it work. However, model parallelism always brings very heavy additional overhead. Therefore, running an extreme-scale model in a single GPU is urgently required. 
There still exist several challenges to reduce the memory footprint for extreme-scale deep learning. To address this tough problem, we first identify the memory usage characteristics for deep and wide convolutional networks, and demonstrate the opportunities for memory reuse at both the intra-layer and inter-layer levels. We then present Layrub, a runtime data placement strategy that orchestrates the execution of the training process. It achieves layer-centric reuse to reduce memory consumption for extreme-scale deep learning that could not previously be run on a single GPU. Experiments show that, compared to the original Caffe, Layrub can cut down the memory usage rate by an average of 58.2% and by up to 98.9%, at the moderate cost of 24.1% higher training execution time on average. Results also show that Layrub outperforms some popular deep learning systems such as GeePS, vDNN, MXNet, and Tensorflow. More importantly, Layrub can tackle extreme-scale deep learning tasks. For example, it makes an extra-deep ResNet with 1,517 layers that can be trained successfully in one GPU with 12GB memory, while other existing deep learning systems cannot.", "title": "" }, { "docid": "49c9ccdf36b60f1a8778919fe8ad3ad2", "text": "Formal evaluations conducted by NIST in 1996 demonstrated that systems that used parallel banks of tokenizer-dependent language models produced the best language identification performance. Since that time, other approaches to language identification have been developed that match or surpass the performance of phone-based systems. This paper describes and evaluates three techniques that have been applied to the language identification problem: phone recognition, Gaussian mixture modeling, and support vector machine classification. A recognizer that fuses the scores of three systems that employ these techniques produces a 2.7% equal error rate (EER) on the 1996 NIST evaluation set and a 2.8% EER on the NIST 2003 primary condition evaluation set. 
An approach to dealing with the problem of out-of-set data is also discussed.", "title": "" }, { "docid": "867a6923a650bdb1d1ec4f04cda37713", "text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.", "title": "" }, { "docid": "c8ca57db545f2d1f70f3640651bb3e79", "text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. 
This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem? The answer is, any way that works.\" (Richard P. Feynman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.", "title": "" }, { "docid": "7321e113293a7198bf88a1744a7ca6c9", "text": "It is widely claimed that research to discover and develop new pharmaceuticals entails high costs and high risks. High research and development (R&D) costs influence many decisions and policy discussions about how to reduce global health disparities, how much companies can afford to discount prices for lower- and middle-income countries, and how to design innovative incentives to advance research on diseases of the poor.
High estimated costs also affect strategies for getting new medicines to the world’s poor, such as the advanced market commitment, which built high estimates into its inflated size and prices. This article takes apart the most detailed and authoritative study of R&D costs in order to show how high estimates have been constructed by industry-supported economists, and to show how much lower actual costs may be. Besides serving as an object lesson in the construction of ‘facts’, this analysis provides reason to believe that R&D costs need not be such an insuperable obstacle to the development of better medicines. The deeper problem is that current incentives reward companies to develop mainly new medicines of little advantage and compete for market share at high prices, rather than to develop clinically superior medicines with public funding so that prices could be much lower and risks to companies lower as well. BioSocieties advance online publication, 7 February 2011; doi:10.1057/biosoc.2010.40", "title": "" }, { "docid": "b39a47adecae9b552a32f890569a0d1b", "text": "Since they are potentially more efficient and simpler in construction, as well as being easier to integrate, electromechanical actuation systems are being considered as an alternative to hydraulic systems for controlling clutches and gearshifts in vehicle transmissions. A high-force, direct-drive linear electromechanical actuator has been developed which acts directly on the shift rails of either an automated manual transmission (AMT) or a dual clutch transmission (DCT) to facilitate gear selection and provide shift-by-wire functionality. 
It offers a number of advantages over electromechanical systems based on electric motors and gearboxes in that it reduces mechanical hysteresis, backlash and compliance, has fewer components, is more robust, and exhibits a better dynamic response", "title": "" }, { "docid": "2ab7cfe4978d09fde9f0bbef9850f3cf", "text": "We propose novel tensor decomposition methods that advocate both properties of sparsity and robustness to outliers. The sparsity enables us to extract some essential features from a big data that are easily interpretable. The robustness ensures the resistance to outliers that appear commonly in high-dimensional data. We first propose a method that generalizes the ridge regression in M-estimation framework for tensor decompositions. The other approach we propose combines the least absolute deviation (LAD) regression and the least absolute shrinkage operator (LASSO) for the CANDECOMP/PARAFAC (CP) tensor decompositions. We also formulate various robust tensor decomposition methods using different loss functions. The simulation study shows that our robust-sparse methods outperform other general tensor decomposition methods in the presence of outliers.", "title": "" }, { "docid": "8eb84b8d29c8f9b71c92696508c9c580", "text": "We introduce a novel in-ear sensor which satisfies key design requirements for wearable electroencephalography (EEG)-it is discreet, unobtrusive, and capable of capturing high-quality brain activity from the ear canal. Unlike our initial designs, which utilize custom earpieces and require a costly and time-consuming manufacturing process, we here introduce the generic earpieces to make ear-EEG suitable for immediate and widespread use. Our approach represents a departure from silicone earmoulds to provide a sensor based on a viscoelastic substrate and conductive cloth electrodes, both of which are shown to possess a number of desirable mechanical and electrical properties. 
Owing to its viscoelastic nature, such an earpiece exhibits good conformance to the shape of the ear canal, thus providing stable electrode-skin interface, while cloth electrodes require only saline solution to establish low impedance contact. The analysis highlights the distinguishing advantages compared with the current state-of-the-art in ear-EEG. We demonstrate that such a device can be readily used for the measurement of various EEG responses.", "title": "" }, { "docid": "09ada66e157c6a99c6317a7cb068367f", "text": "Experience design is a relatively new approach to product design. While there are several possible starting points in designing for positive experiences, we start with experience goals that state a profound source for a meaningful experience. In this paper, we investigate three design cases that used experience goals as the starting point for both incremental and radical design, and analyse them from the perspective of their potential for design space expansion. Our work addresses the recent call for design research directed toward new interpretations of what could be meaningful to people, which is seen as the source for creating new meanings for products, and thereby, possibly leading to radical innovations. Based on this idea, we think about the design space as a set of possible concepts derived from deep meanings that experience goals help to communicate. According to our initial results from the small-scale touchpoint design cases, the type of experience goals we use seem to have the potential to generate not only incremental but also radical design ideas.", "title": "" }, { "docid": "2733a4bc77e7fc22f426e69ebbf6d697", "text": "A microwave nano-probing station incorporating home-made MEMS coplanar waveguide (CPW) probes was built inside a scanning electron microscope. The instrumentation proposed is able to measure accurately the guided complex reflection of 1D devices embedded in dedicated CPW micro-structures. 
As a demonstration, RF impedance characterization of an Indium Arsenide nanowire is exemplary shown up to 6 GHz. Next, optimization of the MEMS probe assembly is experimentally verified by establishing the measurement uncertainty up to 18 GHz.", "title": "" }, { "docid": "36e42f2e4fd2f848eaf82440c2bcbf62", "text": "Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is based on an object carrying an RFID reader module, which reads low-cost passive tags installed next to the object path. A positioning system using a Kalman filter is proposed. The inputs of the proposed algorithm are the measurements of the backscattered signal power propagated from nearby RFID tags and a tag-path position database. The proposed algorithm first estimates the location of the reader, neglecting tag-reader angle-path loss. Based on the location estimate, an iterative procedure is implemented, targeting the estimation of the tag-reader angle-path loss, where the latter is iteratively compensated from the received signal strength information measurement. Experimental results are presented, illustrating the high performance of the proposed positioning system.", "title": "" }, { "docid": "ee510bbe7c7be6e0fb86a32d9f527be1", "text": "Internet communications with paths that include satellite link face some peculiar challenges, due to the presence of a long propagation wireless channel. In this paper, we propose a performance enhancing proxy (PEP) solution, called PEPsal, which is, to the best of the authors' knowledge, the first open source TCP splitting solution for the GNU/Linux operating systems. 
PEPsal improves the performance of a TCP connection over a satellite channel by making use of TCP Hybla, a TCP enhancement for satellite networks developed by the authors. The objective of the paper is to present and evaluate the PEPsal architecture, by comparing it with end-to-end TCP variants (NewReno, SACK, Hybla), considering both performance and reliability issues. Performance is evaluated by making use of a testbed set up at the University of Bologna, to study advanced transport protocols and architectures for Internet satellite communications.", "title": "" }, { "docid": "8d31d43bf080e7b57c09917c9b7e15aa", "text": "We provide 89 challenging simulation environments that range in difficulty. The difficulty of solving a task is linked not only to the number of dimensions in the action space but also to the size and shape of the distribution of configurations the agent experiences. Therefore, we are releasing a number of simulation environments that include randomly generated terrain. The library also provides simple mechanisms to create new environments with different agent morphologies and the option to modify the distribution of generated terrain. We believe using these and other more complex simulations will help push the field closer to creating human-level intelligence.", "title": "" }, { "docid": "fc164dc2d55cec2867a99436d37962a1", "text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences.
The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.", "title": "" }, { "docid": "be9b40cc2e2340249584f7324e26c4d3", "text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.", "title": "" } ]
scidocsrr
e2fdfa856f325cb7e31a34550b2572fe
Immune neglect: a source of durability bias in affective forecasting.
[ { "docid": "cfddb85a8c81cb5e370fe016ea8d4c5b", "text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.", "title": "" } ]
[ { "docid": "f69723ed73c7edd9856883bbb086ed0c", "text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.", "title": "" }, { "docid": "3392de95bfc0e16776550b2a0a8fa00e", "text": "This paper presents a new type of three-phase voltage source inverter (VSI), called three-phase dual-buck inverter. The proposed inverter does not need dead time, and thus avoids the shoot-through problems of traditional VSIs, and leads to greatly enhanced system reliability. 
Though it is still a hard-switching inverter, the topology allows the use of power MOSFETs as the active devices instead of IGBTs typically employed by traditional hard-switching VSIs. As a result, the inverter has the benefit of lower switching loss, and it can be designed at higher switching frequency to reduce current ripple and the size of passive components. A unified pulsewidth modulation (PWM) is introduced to reduce computational burden in real-time implementation. Different PWM methods were applied to a three-phase dual-buck inverter, including sinusoidal PWM (SPWM), space vector PWM (SVPWM) and discontinuous space vector PWM (DSVPWM). A 2.5 kW prototype of a three-phase dual-buck inverter and its control system has been designed and tested under different dc bus voltage and modulation index conditions to verify the feasibility of the circuit, the effectiveness of the controller, and to compare the features of different PWMs. Efficiency measurement of different PWMs has been conducted, and the inverter sees peak efficiency of 98.8% with DSVPWM.", "title": "" }, { "docid": "351e2afb110d9304b5d534be45bf2fba", "text": "BACKGROUND\nThe Lyon Diet Heart Study is a randomized secondary prevention trial aimed at testing whether a Mediterranean-type diet may reduce the rate of recurrence after a first myocardial infarction. An intermediate analysis showed a striking protective effect after 27 months of follow-up. This report presents results of an extended follow-up (with a mean of 46 months per patient) and deals with the relationships of dietary patterns and traditional risk factors with recurrence.\n\n\nMETHODS AND RESULTS\nThree composite outcomes (COs) combining either cardiac death and nonfatal myocardial infarction (CO 1), or the preceding plus major secondary end points (unstable angina, stroke, heart failure, pulmonary or peripheral embolism) (CO 2), or the preceding plus minor events requiring hospital admission (CO 3) were studied. 
In the Mediterranean diet group, CO 1 was reduced (14 events versus 44 in the prudent Western-type diet group, P=0.0001), as were CO 2 (27 events versus 90, P=0.0001) and CO 3 (95 events versus 180, P=0.0002). Adjusted risk ratios ranged from 0.28 to 0.53. Among the traditional risk factors, total cholesterol (1 mmol/L being associated with an increased risk of 18% to 28%), systolic blood pressure (1 mm Hg being associated with an increased risk of 1% to 2%), leukocyte count (adjusted risk ratios ranging from 1.64 to 2.86 with count >9x10(9)/L), female sex (adjusted risk ratios, 0.27 to 0.46), and aspirin use (adjusted risk ratios, 0.59 to 0.82) were each significantly and independently associated with recurrence.\n\n\nCONCLUSIONS\nThe protective effect of the Mediterranean dietary pattern was maintained up to 4 years after the first infarction, confirming previous intermediate analyses. Major traditional risk factors, such as high blood cholesterol and blood pressure, were shown to be independent and joint predictors of recurrence, indicating that the Mediterranean dietary pattern did not alter, at least qualitatively, the usual relationships between major risk factors and recurrence. Thus, a comprehensive strategy to decrease cardiovascular morbidity and mortality should include primarily a cardioprotective diet. It should be associated with other (pharmacological?) means aimed at reducing modifiable risk factors. Further trials combining the 2 approaches are warranted.", "title": "" }, { "docid": "e303b7edea2e32bdc78712efb129588b", "text": "The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. 
OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of \"recent\" paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome.", "title": "" }, { "docid": "0c91a3ed677b42579ba0620710edc432", "text": "Due to the heterogeneous and distributed nature of computer networks, the detection of misconfigurations and software/hardware failures is frequently reported to be notoriously non-trivial. The advent of SDN complicates the situation even more, since besides troubleshooting, the problem of finding software bugs in controller/switch/VNF implementations also has to be solved. Today a wealth of general and SDN-specific troubleshooting tools are available which are usually tailored to identify network-related errors and bugs of a particular nature. In this paper we define a troubleshooting framework which can assemble many of these tools in a single platform and makes it possible to flexibly combine them. 
As we see it, network operators and SDN developers already perform similar tasks, e.g. combining ping, traceroute and tcpdump (or more complex tools) manually to see what is going on in the network. Our framework can ease their work by consolidating the available troubleshooting tools in a flexible and automated manner.", "title": "" }, { "docid": "1259763cc4b2af221663283daed3527c", "text": "CMMI and ISO/IEC 15504 are two main models for software process assessment and improvement. CMMI staged representation provides the standard way to process improvement and an attractive, simple measure of an organization’s software process maturity. ISO/IEC 15504 ensures the possibility to assess the capability of each process, to get a detailed profile of the organization’s process capability and to define an individual improvement path. This paper investigates the relationship between CMMI-DEV maturity levels and ISO/IEC 15504 process capability. The mapping approach and the ISO/IEC 15504 process capability profiles ensured by all CMMI maturity levels are presented. Key-Words: Software process assessment and improvement, organization’s maturity, processes capability profile, CMMI, ISO/IEC 15504.", "title": "" }, { "docid": "6995ef4918b6f31e0d26c9f77204f246", "text": "In order to improve utilization of TV spectrum, regulatory bodies around the world have been developing rules to allow operation by unlicensed users in these bands provided that interference to incumbent broadcasters is avoided. Thus, new services may opportunistically use temporarily unoccupied TV channels, known as television white space (TVWS). This has motivated several standardization efforts such as IEEE 802.22, 802.11af, 802.19 TG1, and ECMA 392 to further cognitive networking. 
Specifically, multiple collocated secondary networks are expected to use TVWS, each with distinct requirements (bandwidth, transmission power, different system architectures, and device types) that must all comply with regulatory requirements to protect incumbents. Heterogeneous coexistence in the TVWS is thus expected to be an important research challenge. This article introduces the current regulatory scenario, emerging standards for cognitive wireless networks targeting the TVWS, and discusses possible coexistence scenarios and associated challenges. Furthermore, the article casts an eye on future considerations for these upcoming standards in support of spectrum sharing opportunities as a function of network architecture evolution.", "title": "" }, { "docid": "b6937126282162204fd2e45b70d4f840", "text": "In the ventral premotor cortex (area F5) of the monkey there are neurons that discharge both when the monkey performs specific motor actions and when it observes another individual performing a similar action (mirror neurons). Previous studies on mirror neurons concerned hand actions. Here, we describe the mirror responses of F5 neurons that motorically code mouth actions. The results showed that about one-third of mouth motor neurons also discharge when the monkey observes another individual performing mouth actions. The majority of these 'mouth mirror neurons' become active during the execution and observation of mouth actions related to ingestive functions such as grasping, sucking or breaking food. Another population of mouth mirror neurons also discharges during the execution of ingestive actions, but the most effective visual stimuli in triggering them are communicative mouth gestures (e.g. lip smacking). Some also fire when the monkey makes communicative gestures. 
These findings extend the notion of mirror system from hand to mouth action and suggest that area F5, the area considered to be the homologue of human Broca's area, is also involved in communicative functions.", "title": "" }, { "docid": "b1e431f48c52a267c7674b5526d9ee23", "text": "Publish/subscribe is a distributed interaction paradigm well adapted to the deployment of scalable and loosely coupled systems.\n Apache Kafka and RabbitMQ are two popular open-source and commercially-supported pub/sub systems that have been around for almost a decade and have seen wide adoption. Given the popularity of these two systems and the fact that both are branded as pub/sub systems, two frequently asked questions in the relevant online forums are: how do they compare against each other and which one to use?\n In this paper, we frame the arguments in a holistic approach by establishing a common comparison framework based on the core functionalities of pub/sub systems. Using this framework, we then venture into a qualitative and quantitative (i.e. empirical) comparison of the common features of the two systems. Additionally, we also highlight the distinct features that each of these systems has. After enumerating a set of use cases that are best suited for RabbitMQ or Kafka, we try to guide the reader through a determination table to choose the best architecture given his/her particular set of requirements.", "title": "" }, { "docid": "769a263c08934e330a87c1af15b6af21", "text": "Realization of brain-like computer has always been human's ultimate dream. Today, the possibility of having this dream come true has been significantly boosted due to the advent of several emerging non-volatile memory devices. Within these innovative technologies, phase-change memory device has been commonly regarded as the most promising candidate to imitate the biological brain, owing to its excellent scalability, fast switching speed, and low energy consumption. 
In this context, a detailed review concerning the physical principles of neuromorphic circuits using phase-change materials, as well as a comprehensive introduction to the currently available phase-change neuromorphic prototypes, becomes imperative for scientists to continuously advance the technology of artificial neural networks. In this paper, we first present the biological mechanism of the human brain, followed by a brief discussion of the physical properties of phase-change materials that have recently received widespread application in the non-volatile memory field. We then survey recent research on different types of neuromorphic circuits using phase-change materials in terms of their respective geometrical architectures and physical schemes to reproduce the biological events of the human brain, in particular spike-time-dependent plasticity. The relevant virtues and limitations of these devices are also evaluated. Finally, the future prospect of neuromorphic circuits based on phase-change technologies is envisioned.", "title": "" }, { "docid": "4b25c7e58f49784d525398f4611b7ffa", "text": "In this work, we studied the extraction process of papain, present in the latex of papaya fruit (Carica papaya L.) cv. Maradol. The variables studied in the extraction of papain were: latex:alcohol ratio (1:2.1 and 1:3) and drying method (vacuum and refractance window). Papain enzyme responses were obtained in terms of enzymatic activity and yield of the extraction process. The best result in terms of enzyme activity and yield was obtained by vacuum drying and a latex:alcohol ratio of 1:3. The enzyme obtained was characterized by its physicochemical and microbiological properties and enzymatic activity when compared with a commercial sample used as a standard.", "title": "" }, { "docid": "7ddab8f1a5306062f4b835e7bf696e9e", "text": "WGCNA begins with the understanding that the information captured by microarray experiments is far richer than a list of differentially expressed genes. 
Rather, microarray data are more completely represented by considering the relationships between measured transcripts, which can be assessed by pair-wise correlations between gene expression profiles. In most microarray data analyses, however, these relationships go essentially unexplored. WGCNA starts from the level of thousands of genes, identifies clinically interesting gene modules, and finally uses intramodular connectivity and gene significance (e.g. based on the correlation of a gene expression profile with a sample trait) to identify key genes in the disease pathways for further validation. WGCNA alleviates the multiple testing problem inherent in microarray data analysis. Instead of relating thousands of genes to a microarray sample trait, it focuses on the relationship between a few (typically less than 10) modules and the sample trait. Toward this end, it calculates the eigengene significance (correlation between sample trait and eigengene) and the corresponding p-value for each module. The module definition does not make use of a priori defined gene sets. Instead, modules are constructed from the expression data by using hierarchical clustering. Although it is advisable to relate the resulting modules to gene ontology information to assess their biological plausibility, it is not required. Because the modules may correspond to biological pathways, focusing the analysis on intramodular hub genes (or the module eigengenes) amounts to a biologically motivated data reduction scheme. Because the expression profiles of intramodular hub genes are highly correlated, typically dozens of candidate biomarkers result. Although these candidates are statistically equivalent, they may differ in terms of biological plausibility or clinical utility. Gene ontology information can be useful for further prioritizing intramodular hub genes. Examples of biological studies that show the importance of intramodular hub genes can be found in [4, 1, 2, 3, 5]. 
A flow chart of a typical network analysis is shown in Fig. 1. Below we present a short glossary of important network-related terms.", "title": "" }, { "docid": "1d161bf47ac2efd6597d20fdb100291e", "text": "Amphetamine (AMPH) and its derivatives are regularly used in the treatment of a wide array of disorders such as attention-deficit hyperactivity disorder (ADHD), obesity, traumatic brain injury, and narcolepsy (Prog Neurobiol 75:406–433, 2005; J Am Med Assoc 105:2051–2054, 1935; J Am Acad Child Adolesc Psychiatry 41:514–521, 2002; Neuron 43:261–269, 2004; Annu Rev Pharmacol Toxicol 47:681–698, 2007; Drugs Aging 21:67–79, 2004). Despite the important medicinal role for AMPH, it is more widely known for its psychostimulant and addictive properties as a drug of abuse. The primary molecular targets of AMPH are both the vesicular monoamine transporters (VMATs) and plasma membrane monoamine—dopamine (DA), norepinephrine (NE), and serotonin (5-HT)—transporters. The rewarding and addicting properties of AMPH rely on its ability to act as a substrate for these transporters and ultimately increase extracellular levels of monoamines. AMPH achieves this elevation in extracellular levels of neurotransmitter by inducing synaptic vesicle depletion, which increases intracellular monoamine levels, and also by promoting reverse transport (efflux) through plasma membrane monoamine transporters (J Biol Chem 237:2311–2317, 1962; Med Exp Int J Exp Med 6:47–53, 1962; Neuron 19:1271–1283, 1997; J Physiol 144:314–336, 1958; J Neurosci 18:1979–1986, 1998; Science 237:1219–1223, 1987; J Neurosc 15:4102–4108, 1995). 
This review will focus on two important aspects of AMPH-induced regulation of the plasma membrane monoamine transporters—transporter mediated monoamine efflux and transporter trafficking.", "title": "" }, { "docid": "96010bf04c08ace7932fb5c48b2f8798", "text": "Spatio-temporal databases aim to support extensions to existing models of Spatial Information Systems (SIS) to include time in order to better describe our dynamic environment. Although interest into this area has increased in the past decade, a number of important issues remain to be investigated. With the advances made in temporal database research, we can expect a more uni®ed approach towards aspatial temporal data in SIS and a wider discussion on spatio-temporal data models. This paper provides an overview of previous achievements within the ®eld and highlights areas currently receiving or requiring further investigation.", "title": "" }, { "docid": "cbc9437811bff9a1d96dd5d5f886c598", "text": "Weakly supervised learning for object detection has been gaining significant attention in the recent past. Visually similar objects are extracted automatically from weakly labelled videos hence bypassing the tedious process of manually annotating training data. However, the problem as applied to small or medium sized objects is still largely unexplored. Our observation is that weakly labelled information can be derived from videos involving human-object interactions. Since the object is characterized neither by its appearance nor its motion in such videos, we propose a robust framework that taps valuable human context and models similarity of objects based on appearance and functionality. Furthermore, the framework is designed such that it maximizes the utility of the data by detecting possibly multiple instances of an object from each video. 
We show that object models trained in this fashion perform between 86% and 92% of their fully supervised counterparts on three challenging RGB and RGB-D datasets.", "title": "" }, { "docid": "b77ab33226f6d643aee49d63d3485d46", "text": "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.", "title": "" }, { "docid": "23f91ffdd3c15fdeeb3ef33ca463c238", "text": "The Shield project relied on application protocol analyzers to detect potential exploits of application vulnerabilities. We present the design of a second-generation generic application-level protocol analyzer (GAPA) that encompasses a domain-specific language and the associated run-time. We designed GAPA to satisfy three important goals: safety, real-time analysis and response, and rapid development of analyzers. We have found that these goals are relevant for many network monitors that implement protocol analysis. Therefore, we built GAPA to be readily integrated into tools such as Ethereal as well as Shield. GAPA preserves safety through the use of a memorysafe language for both message parsing and analysis, and through various techniques to reduce the amount of state maintained in order to avoid denial-of-service attacks. 
To support online analysis, the GAPA runtime uses a stream-processing model with incremental parsing. In order to speed protocol development, GAPA uses a syntax similar to many protocol RFCs and other specifications, and incorporates many common protocol analysis tasks as built-in abstractions. We have specified 10 commonly used protocols in the GAPA language and found it expressive and easy to use. We measured our GAPA prototype and found that it can handle an enterprise client HTTP workload at up to 60 Mbps, sufficient performance for many end-host firewall/IDS scenarios. At the same time, the trusted code base of GAPA is an order of magnitude smaller than Ethereal.", "title": "" }, { "docid": "eac94aff93246b77327d1da0e499ce60", "text": "Finding appropriate stable grasps for a hand (either robotic or human) on an arbitrary object has proved to be a challenging and difficult problem. The space of grasping parameters coupled with the degrees-of-freedom and geometry of the object to be grasped creates a high-dimensional, non-smooth manifold. Traditional search methods applied to this manifold are typically not powerful enough to find appropriate stable grasping solutions, let alone optimal grasps. We address this issue in this paper, which attempts to find optimal grasps of objects using a grasping simulator. Our unique approach to the problem involves a combination of numerical methods to recover parts of the grasp quality surface with any robotic hand, and contemporary machine learning methods to interpolate that surface, in order to find the optimal grasp.", "title": "" }, { "docid": "3ea021309fd2e729ffced7657e3a6038", "text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. 
This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.", "title": "" }, { "docid": "56fb6fe1f6999b5d7a9dab19e8b877ef", "text": "Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.", "title": "" } ]
scidocsrr
7280100936d05cbde0a03facfe4aab73
60-GHz Dual-Polarized Two-Dimensional Switch-Beam Wideband Antenna Array of Aperture-Coupled Magneto-Electric Dipoles
[ { "docid": "e325165aa6628514015a6b467bf6c036", "text": "Wafer-scale beamforming lenses for future IEEE802.15.3c 60 GHz WPAN applications are presented. An on-wafer fabrication is of particular interest because a beamforming lens can be fabricated with sub-circuits in a single process. It means that the beamforming lens system would be compact, reliable, and cost-effective. The Rotman lens and the Rotman lens with antenna arrays were fabricated on a high-resistivity silicon (HRS) wafer in a semiconductor process, which is a preliminary research to check the feasibility of a Rotman lens for a chip scale packaging. In the case of the Rotman lens only, the efficiency is in the range from 50% to 70% depending on which beam port is excited. Assuming that the lens is coupled with ideal isotropic antennas, the synthesized beam patterns from the S-parameters shows that the beam directions are -29.3°, -15.1°, 0.2°, 15.2°, and 29.5 °, and the beam widths are 15.37°, 15.62°, 15.46°, 15.51°, and 15.63°, respectively. In the case of the Rotman lens with antenna array, the patterns were measured by using on-wafer measurement setup. It shows that the beam directions are -26.6°, -21.8°, 0°, 21.8°, and 26.6° . These results are in good agreement with the calculated results from ray-optic. Thus, it is verified that the lens antenna implemented on a wafer can be feasible for the system-in-package (SiP) and wafer-level package technologies.", "title": "" }, { "docid": "f1c4577a013e313d3a0bfdd1f5c9981e", "text": "In this work, a simple and compact transition from substrate integrated waveguide (SIW) to traditional rectangular waveguide is proposed and demonstrated. The substrate of SIW can be easily surface-mounted to the standard flange of the waveguide by creating a flange on the substrate. A longitudinal slot window etched on the broad wall of SIW couples energy between SIW and rectangular waveguide. 
An example of the transition structure is realized at 35 GHz with a substrate of RT/Duroid 5880. The HFSS-simulated result of the transition shows a return loss less than −15 dB over a frequency range of 800 MHz. A back-to-back connected transition has been fabricated, and the measured results agree well with the anticipated ones.", "title": "" } ]
[ { "docid": "5648ad4ca9c350abdc7177fb0a771382", "text": "The segmentation of transparent objects can be very useful in computer vision applications. However, because they borrow texture from their background and have a similar appearance to their surroundings, transparent objects are not handled well by regular image segmentation methods. We propose a method that overcomes these problems using the consistency and distortion properties of a light-field image. Graph-cut optimization is applied for the pixel labeling problem. The light-field linearity is used to estimate the likelihood of a pixel belonging to the transparent object or Lambertian background, and the occlusion detector is used to find the occlusion boundary. We acquire a light field dataset for the transparent object, and use this dataset to evaluate our method. The results demonstrate that the proposed method successfully segments transparent objects from the background.", "title": "" }, { "docid": "949c1c832e67283c9bb0c400b59f6292", "text": "Industrial control system intrusion detection is a popular topic of research for several years, and many intrusion detection systems (IDS) have been proposed in literature. IDS researchers lack a common framework to train and test proposed algorithms. This leads to an inability to properly compare proposed IDS and limits research progress. This paper documents 2 approaches to data sharing for the industrial control system IDS research community. First, a network traffic data log captured from a gas pipeline is presented. The gas pipeline data log was captured in a laboratory and includes artifacts of normal operation and cyberattacks. Second, an expandable virtual gas pipeline is presented which includes a human machine interface, programmable logic controller, Modbus/TCP communication, and a Simulink based gas pipeline model. The virtual gas pipeline provides the ability to model cyber-attacks and normal behavior. 
IDS solutions can overlay the virtual gas pipeline for training and testing.", "title": "" }, { "docid": "22d9ae82a09a212eb5dcd48ad77cc7a9", "text": "The purpose of this study is to propose an extended model of Theory of Planned Behavior (TPB) by incorporating constructs drawn from the model of Expectation Disconfirmation Theory (EDT) and to examine the antecedents of users’ intention to continue using online shopping (continuance intention). Prior research has demonstrated that TPB constructs, including attitude, subjective norm, and perceived behavioral control, are important factors in determining the acceptance and use of various information technologies. These factors, however, are insufficient to explain a user’s continuance intention in the online shopping context. In this study we extended TPB with two EDT constructs—disconfirmation and satisfaction—for studying users’ continuance intention in the online shopping context. By employing longitudinal method with two-stage survey, we empirically validated the proposed model and research hypotheses. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "86cb3c072e67bed8803892b72297812c", "text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of", "title": "" }, { "docid": "e8197d339037ada47ed6db5f8f427211", "text": "Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. 
These robots are constructed from sets of precurved superelastic tubes and are capable of assuming complex 3-D curves. The family of 3-D curves that the robot can assume depends on the number, curvatures, lengths, and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery.", "title": "" }, { "docid": "1b844eb4aeaac878ebffaaf5b4d6e3ab", "text": "Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. 
The reversibility property allows a memory-efficient implementation, which does not need to store the activations for most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.", "title": "" }, { "docid": "56346f33d2adf529ff11e82d42cce4c6", "text": "A smart contract is hard to patch for bugs once it is deployed, irrespective of the money it holds. A recent bug caused losses worth around $50 million of cryptocurrency. We present ZEUS—a framework to verify the correctness and validate the fairness of smart contracts. We consider correctness as adherence to safe programming practices, while fairness is adherence to agreed upon higher-level business logic. ZEUS leverages both abstract interpretation and symbolic model checking, along with the power of constrained horn clauses to quickly verify contracts for safety. We have built a prototype of ZEUS for Ethereum and Fabric blockchain platforms, and evaluated it with over 22.4K smart contracts. Our evaluation indicates that about 94.6% of contracts (containing cryptocurrency worth more than $0.5 billion) are vulnerable. ZEUS is sound with zero false negatives and has a low false positive rate, with an order of magnitude improvement in analysis time as compared to prior art.", "title": "" }, { "docid": "b69f2c426f86ad0e07172eb4d018b818", "text": "Versatile motor skills for hitting and throwing motions can be observed in humans already in early ages. 
Future robots require high power-to-weight ratios as well as inherent long operational lifetimes without breakage in order to achieve similar perfection. Robustness due to passive compliance and high-speed catapult-like motions as possible with fast energy release are further beneficial characteristics. Such properties can be realized with antagonistic muscle-based designs. Additionally, control algorithms need to exploit the full potential of the robot. Learning control is a promising direction due to its potential to capture uncertainty and control complex systems. The aim of this paper is to build a robotic arm that is capable of generating high accelerations and sophisticated trajectories as well as enable exploration at such speeds for robot learning approaches. Hence, we have designed a light-weight robot arm with moving masses below 700 g with powerful antagonistic compliant actuation with pneumatic artificial muscles. Rather than recreating human anatomy, our system is designed to be easy to control in order to facilitate future learning of fast trajectory tracking control. The resulting robot is precise at low speeds using a simple PID controller while reaching high velocities of up to 12 m/s in task space and 1500 deg/s in joint space. This arm will enable new applications in fast-changing and uncertain tasks like robot table tennis while being a sophisticated and reproducible test-bed for robot skill learning methods. Construction details are available.", "title": "" }, { "docid": "4f3ae7d6f0de941f0bd91c1eb8325c09", "text": "This paper explores the introduction of groupware into an organization to understand the changes in work practices and social interaction facilitated by the technology. The results suggest that people’s mental models and organizations’ structure and culture significantly influence how groupware is implemented and used. 
Specifically, in the absence of mental models that stressed its collaborative nature, groupware was interpreted in terms of familiar personal, stand-alone technologies such as spreadsheets. Further, the culture and structure provided few incentives or norms for cooperating or sharing expertise; hence the groupware on its own was unlikely to engender collaboration. Recognizing the central influence of these cognitive and organizational elements is critical to developers, researchers, and practitioners of groupware.", "title": "" }, { "docid": "444c3a4eb179604e96fb39b68f999143", "text": "Reduced heart rate variability carries an adverse prognosis in patients who have survived an acute myocardial infarction. This article reviews the physiology, technical problems of assessment, and clinical relevance of heart rate variability. The sympathovagal influence and the clinical assessment of heart rate variability are discussed. Methods measuring heart rate variability are classified into four groups, and the advantages and disadvantages of each group are described. Concentration is on risk stratification of postmyocardial infarction patients. The evidence suggests that heart rate variability is the single most important predictor of those patients who are at high risk of sudden death or serious ventricular arrhythmias.", "title": "" }, { "docid": "41353a12a579f72816f1adf3cba154dd", "text": "The crux of our initialization technique is n-gram selection, which assists neural networks to extract important n-gram features at the beginning of the training process. In the following tables, we illustrate those selected n-grams of different classes and datasets to understand our technique intuitively. Since all of MR, SST-1, SST-2, CR, and MPQA are sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1).
N-grams selected by our method in SUBJ and TREC are shown in Table 2 and Table 3.", "title": "" }, { "docid": "4c3e6abcc0963efe7423fa25e9b231cb", "text": "In this demo, we present NaLIR, a generic interactive natural language interface for querying relational databases. NaLIR can accept a logically complex English language sentence as query input. This query is first translated into a SQL query, which may include aggregation, nesting, and various types of joins, among other things, and then evaluated against an RDBMS. In this demonstration, we show that NaLIR, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed interactive communication can avoid misinterpretation with minimum user burden.", "title": "" }, { "docid": "7fdd251faf180d2daecb6bfe0c825b2e", "text": "Biometric recognition, or biometrics, refers to the authentication of an individual based on her/his biometric traits. Among the various biometric traits (e.g., face, iris, fingerprint, voice), fingerprint-based authentication has the longest history, and has been successfully adopted in both forensic and civilian applications. Advances in fingerprint capture technology have resulted in new large scale civilian applications (e.g., US-VISIT program). However, these systems still encounter difficulties due to various noise factors present in operating environments. The purpose of this article is to give an overview of fingerprint-based recognition and discuss research opportunities for making these systems perform more effectively.", "title": "" }, { "docid": "31e8d60af8a1f9576d28c4c1e0a3db86", "text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. 
The high volume of sensor data calls for the implementation of an appropriate sensor data compression technique to deal with the problems of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance needed to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific to sensor datasets that is easily realizable with standard compression methods. SensCompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of the compression scheme to maximize compression gain while minimizing information loss. SensCompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, and accelerometer data. To the best of our knowledge, this is the first time a 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT applications, and this method is independent of sensor data type.", "title": "" }, { "docid": "10ee57480485050a1bb52dcb9203bd26", "text": "This paper investigates and evaluates coupled inductors (CIs) in the interleaved multiphase three-level dc-dc converter. If non-CIs are used in the multiphase three-level dc-dc converter, interleaving operation of the converter will increase inductor current ripple, although the overall output current ripple and common-mode (CM) voltage will become smaller. To reduce inductor current ripple, inverse-CIs are employed. The current ripple in the CI is analyzed in detail.
The benefits of the three-level dc-dc converter with CIs under interleaving operation are evaluated. By adding CIs and working under interleaving operation, smaller inductor current ripple, smaller overall output current ripple, and smaller CM voltage can be achieved simultaneously compared with the noninterleaving case. The analysis results are verified by simulations and 10 kW scale-down experiments.", "title": "" }, { "docid": "799447689731d339d1fd8b2539e1210b", "text": "The words of a language reflect the structure of the human mind, allowing us to transmit thoughts between individuals. However, language can represent only a subset of our rich and detailed cognitive architecture. Here, we ask what kinds of common knowledge (semantic memory) are captured by word meanings (lexical semantics). We examine a prominent computational model that represents words as vectors in a multidimensional space, such that proximity between wordvectors approximates semantic relatedness. Because related words appear in similar contexts, such spaces – called “word embeddings” – can be learned from patterns of lexical co-occurrences in natural language. Despite their popularity, a fundamental concern about word embeddings is that they appear to be semantically “rigid”: inter-word proximity captures only overall similarity, yet human judgments about object similarities are highly context-dependent and involve multiple, distinct semantic features. For example, dolphins and alligators appear similar in size, but differ in intelligence and aggressiveness. Could such context-dependent relationships be recovered from word embeddings? To address this issue, we introduce a powerful, domain-general solution: “semantic projection” of word-vectors onto lines that represent various object features, like size (the line extending from the word “small” to “big”), intelligence (“dumb” → “smart”), or danger (“safe” → “dangerous”). 
This method, which is intuitively analogous to placing objects “on a mental scale” between two extremes, recovers human judgments across a range of object categories and properties. We thus show that word embeddings inherit a wealth of common knowledge from word co-occurrence statistics and can be flexibly manipulated to express context-dependent meanings.", "title": "" }, { "docid": "b1b0ee2c46314f311407158cd28f3079", "text": "Accurate data regarding the size of the erect penis is of great importance to several disciplines working with male patients, but little data exists on the best technique to measure penile length. While some previous small studies have suggested good correlation between stretched penile length, others have shown significant variability. Penile girth has been less well studied, and little data exist on the possible errors induced by differing observers and different techniques. Much of the published data report penile length measured from the penopubic skin junction-to-glans tip (STT) rather than pubic bone-to-tip (BTT). We wished to assess the accuracy of different techniques of penile measurements with multiple observers. Men who achieved full erection using dynamic penile Doppler ultrasound for the diagnosis of sexual dysfunction or a desire for objective penile measurement were included in the study. Exclusion criteria were penile scarring, curvature, or congenital abnormality. In each case, the penis was measured by one of the seven andrology specialists in a private air-conditioned (21 °C) environment. Each patient had three parameters measured: circumference (girth) of the penile shaft, length from suprapubic skin-to-distal glans (STT), and pubis-to-distal glans (BTT). The three measurements were recorded in the stretched flaccid state, and the same three measurements were then repeated in the fully erect state, following induction of full erection with intracavernosal injection. 
We analyzed the accuracy of each flaccid measurement using the erect measurements as a reference, for the overall patient population and for each observer. In total, 201 adult men (mean age 49.4 years) were included in this study. Assessing the penis in the stretched and flaccid state gave a mean underestimate of the erect measurement of ~20% (STT length 23.39%, BTT length 19.86%, and circumference 21.38%). In this large, multicenter, multi-observer study of penis size, flaccid measurements were only moderately accurate in predicting erect size. They were also significantly observer dependent. Measuring penile length from pubic bone to tip of glans is more accurate and reliable, the discrepancy being most notable in overweight patients.", "title": "" }, { "docid": "072f3152a93eb2a75f716dd1aec131c4", "text": "Research has not verified the theoretical or practical value of the brand attachment construct in relation to alternative constructs, particularly brand attitude strength. The authors make conceptual, measurement, and managerial contributions to this research issue. Conceptually, they define brand attachment, articulate its defining properties, and differentiate it from brand attitude strength. From a measurement perspective, they develop and validate a parsimonious measure of brand attachment, test the assumptions that underlie it, and demonstrate that it indicates the concept of attachment. They also demonstrate the convergent and discriminant validity of this measure in relation to brand attitude strength. 
Managerially, they demonstrate that brand attachment offers value over brand attitude strength in predicting (1) consumers’ intentions to perform difficult behaviors (those they regard as using consumer resources), (2) actual purchase behaviors, (3) brand purchase share (the share of a brand among directly competing brands), and (4) need share (the extent to which consumers rely on a brand to address relevant needs, including those brands in substitutable product categories).", "title": "" }, { "docid": "2cff00acdccfc43ed2bc35efe704f1ac", "text": "A decision to invest in new manufacturing enabling technologies supporting computer integrated manufacturing (CIM) must include non-quantifiable, intangible benefits to the organization in meeting its strategic goals. Therefore, use of tactical-level, purely economic evaluation methods normally results in the rejection of strategically vital automation proposals. This paper includes four different fuzzy multi-attribute group decision-making methods. The first one is a fuzzy model of group decision proposed by Blin. The second is fuzzy synthetic evaluation, the third is Yager’s weighted goals method, and the last one is fuzzy analytic hierarchy process. These methods are extended to select the best computer integrated manufacturing system by taking into account both intangible and tangible factors. Computer software for these approaches is developed, and finally some numerical applications of these methods are given to compare the results of all methods.", "title": "" }, { "docid": "5ab8a8f4991f7c701c51e32de7f97b36", "text": "Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data.
In particular, a special type of DNN, called the convolutional neural network (CNN), has recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generates significant classification improvement over previously published results on a standard dataset called MSTAR.", "title": "" } ]
scidocsrr
ab2fd61dce90ff8ef98a102c4d9aff14
Semantic expansion using word embedding clustering and convolutional neural network for improving short text classification
[ { "docid": "e59d1a3936f880233001eb086032d927", "text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/", "title": "" } ]
[ { "docid": "46fb68fc33453605c14e36d378c5e23e", "text": "Meaning in life is thought to be important to well-being throughout the human life span. We assessed the structure, levels, and correlates of the presence of meaning in life, and the search for meaning, within four life stage groups: emerging adulthood, young adulthood, middle-age adulthood, and older adulthood. Results from a sample of Internet users (N = 8756) demonstrated the structural invariance of the meaning measure used across life stages. Those at later life stages generally reported a greater presence of meaning in their lives, whereas those at earlier life stages reported higher levels of searching for meaning. Correlations revealed that the presence of meaning has similar relations to well-being across life stages, whereas searching for meaning is more strongly associated with well-being deficits at later life stages. Introduction Meaning in life has enjoyed a renaissance of interest in recent years, and is considered to be an important component of broader well-being (e.
Perceptions of meaning in life are thought to be related to the development of a coherent sense of one's identity (Heine, Proulx, & Vohs, 2006), and the process of creating a sense of meaning theoretically begins in adolescence, continuing throughout life (Fry, 1998). Meaning creation should then be linked to individual development, and is likely to unfold in conjunction with other processes, such as the development of identity, relationships, and goals. Previous research has revealed that people experience different levels of the presence of meaning at different ages (e.g., Ryff & Essex, 1992), although these findings have been inconsistent, and inquiries have generally focused on limited age ranges (e.g., Pinquart, 2002). The present study aimed to integrate research on dimensions of meaning in life across the life span by providing an analysis …", "title": "" }, { "docid": "c4590f91c2644849dc7154e923635f0d", "text": "Researchers believe that Employer branding may be the most powerful tool a business can use to emotionally engage employees, maintain and retain the talented. It is essential to accurately measure whether the organization's values, systems, policies and behaviors work towards the objectives of attracting, motivating and retaining current and potential employees. This paper envisages examining empirically the Employer brand status in the IT/ITES (Information Technology / Information Technology Enabled Services) units under study and determining, if any, the differences in the Employer brand and its components /elements among the IT students and IT professionals. This study is limited to analyzing the Employer brand in terms of the perceived Employer brand image and the Employer brand expectations in the selected units of the IT industry in India. 
The findings would help portray the Employer brand image and expectations and provide policy makers and HR consultants a starting point to look individually into the various labor segments and evaluate their Employer brands.", "title": "" }, { "docid": "a49a425a7345d075775b0c409aa6c1f8", "text": "Attention is critical to learning. Hence, advanced learning technologies should benefit from mechanisms to monitor and respond to learners' attentional states. We study the feasibility of integrating commercial off-the-shelf (COTS) eye trackers to monitor attention during interactions with a learning technology called GuruTutor. We tested our implementation on 135 students in a noisy computer-enabled high school classroom and were able to collect a median 95% valid eye gaze data in 85% of the sessions where gaze data was successfully recorded. Machine learning methods were employed to develop automated detectors of mind wandering (MW) -- a phenomenon involving a shift in attention from task-related to task-unrelated thoughts that is negatively correlated with performance. Our student-independent, gaze-based models could detect MW with an accuracy (F1 of MW = 0.59) significantly greater than chance (F1 of MW = 0.24). Predicted rates of mind wandering were negatively related to posttest performance, providing evidence for the predictive validity of the detector. We discuss next steps towards developing gaze-based, attention-aware, learning technologies that can be deployed in noisy, real-world environments.", "title": "" }, { "docid": "707c5c55c11aac05c783929239f953dd", "text": "Social networks are of significant analytical interest: their data are generated in great quantity and intermittently, come in a wide variety, and are widely available to users. Through such data, it is desired to extract knowledge or information that can be used in decision-making activities.
In this context, we have identified a lack of methods that apply data mining techniques to the task of analyzing the professional profiles of employees. The aim of such analyses is to detect competencies that are of greater interest by being more frequently required, and to identify their associative relations. Thus, this work introduces the MineraSkill methodology, which deals with methods to infer the desired profile of a candidate for a job vacancy. In order to do so, we use keyword detection via natural language processing techniques, and relate the detected keywords to one another by inferring association rules. The results are presented in the form of a case study, which analyzed data from LinkedIn, demonstrating the potential of the methodology in indicating trending competencies that are required together.", "title": "" }, { "docid": "a1bef11b10bc94f84914d103311a5941", "text": "Class imbalance and class overlap are two of the major problems in data mining and machine learning. Several studies have shown that these data complexities may affect the performance or behavior of artificial neural networks. Strategies proposed to face both challenges have been separately applied. In this paper, we introduce a hybrid method for handling both class imbalance and class overlap simultaneously in multi-class learning problems. Experimental results on five remote sensing datasets show that the combined approach is a promising method.", "title": "" }, { "docid": "cb59c880b3848b7518264f305cfea32a", "text": "Leakage current reduction is crucial for transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from large leakage current, which restricts its application to transformerless PV systems. In order to overcome these limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper.
Only one additional Insulated Gate Bipolar Transistor is needed, yet the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.", "title": "" }, { "docid": "a112db5b9cc50564c81b373c2abeb777", "text": "In this paper, an S-shape microstrip patch antenna is investigated for wideband operation using a circuit theory concept based on the modal expansion cavity model. It is found that the antenna resonates at 2.62 GHz. The bandwidth of the S-shape microstrip patch antenna is 21.62% (theoretical) and 20.49% (simulated). The theoretical results are compared with IE3D simulation as well as reported experimental results, and they are in close agreement.", "title": "" }, { "docid": "1ec395dbe807ff883dab413419ceef56", "text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. The following are the key messages: (1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes.
Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.", "title": "" }, { "docid": "cb62164bc5a582be0c45df28d8ebb797", "text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. 
We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.", "title": "" }, { "docid": "50a6240340448a869c9a883fc8b89aeb", "text": "Alzheimer’s disease, for which there is currently no effective therapy, is the most common senile dementia. Alzheimer’s disease patients have notable abnormalities in cholinergic neurons in the basal forebrain. Neurotrophic factors have potent biological activities, such as preventing neuronal death and promoting neurite outgrowth, and are essential to maintain and organize neurons functionally. Glial cells support neurons by releasing neurotrophic factors, such as nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin 3, and glial-derived neurotrophic factor (GDNF). In particular, it is assumed that functional deficiency of NGF is related to Alzheimer’s disease and plays a part in the etiology of the disease process. It is known that NGF levels are decreased in the basal forebrains of Alzheimer’s disease patients, and in the frontal cortices of undemented patients with senile plaques. Furthermore, intracerebroventricular administration of NGF eliminates degeneration and resultant cognitive deficits in rats after brain injury, and it enhances the retention of passive avoidance learning in developing mice. In aged rats, intracerebral infusion of NGF partly reverses cholinergic cell body atrophy and improves the retention of spatial memory. In addition, intranasal administration of NGF ameliorates neurodegeneration and reduces the numbers of amyloid plaques in transgenic anti-NGF mice (AD11 mice), which have a progressive neurodegenerative phenotype resembling Alzheimer’s disease. Therefore, NGF is expected to be applied to the treatment of Alzheimer’s disease. However, neurotrophic factors are proteins, and so are unable to cross the blood–brain barrier; they are also easily metabolized by peptidases.
Therefore, their application as a medicine for the treatment of neurodegenerative disorders is assumed to be difficult. Alternatively, research has been carried out on low-molecular-weight compounds that promote NGF biosynthesis, such as catecholamines, benzoquinones, fellutamides, idebenone, kansuinin, ingenol triacetate, jolkinolide B, dictyophorines, scabronines, hericenones, erinacins, and cyrneines. Hericium erinaceus is a mushroom that grows on old or dead broadleaf trees. H. erinaceus is taken as a food in Japan and China without harmful effects. Hericenones C—H and erinacines A—I were isolated from the fruit body and mycelium of H. erinaceus, respectively, all of which promote NGF synthesis in rodent cultured astrocytes. These results suggest the usefulness of H. erinaceus for the treatment and prevention of dementia. However, the detailed mechanism by which H. erinaceus induces NGF synthesis remains unknown. In the present study, we examined the NGF-inducing activity of ethanol extracts of H. erinaceus in 1321N1 human astrocytoma cells. The results obtained indicate that H. erinaceus has NGF-inducing activity, but that its active compounds are not hericenones. Furthermore, ICR mice given feed containing 5% H. erinaceus dry powder for 7 d showed an increase in the level of NGF mRNA expression in the hippocampus.", "title": "" }, { "docid": "8850aa9b16d37abf2dabb9695d0ad9fa", "text": "Whether the data to be classified come from the field of neural networks or from a biometrics application such as handwriting classification or iris detection, possibly the most straightforward classifier in the arsenal of machine learning techniques is the Nearest Neighbor Classifier, in which classification is achieved by identifying the nearest neighbors to a query example and using those neighbors to determine the class of the query. K-NN classification classifies instances based on their similarity to instances in the training data.
This paper presents results obtained with various distance measures used in the algorithm, which may help in understanding the response of the classifier for a desired application; it also discusses computational issues in identifying nearest neighbors and mechanisms for reducing the dimension of the data. Keywords— K-NN, Biometrics, Classifier, distance", "title": "" }, { "docid": "4daa16553442aa424a1578f02f044c6e", "text": "Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately of the function of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted. The methods are demonstrated by computing a SOM of yeast genes. The relationships of known functional classes of genes are investigated by analyzing their distribution on the SOM, the cluster structure is visualized by the U-matrix method, and the clusters are characterized in terms of the properties of the expression profiles of the genes. Finally, it is shown that the SOM visualizes the similarity of genes in a more trustworthy way than two alternative methods, multidimensional scaling and hierarchical clustering.", "title": "" }, { "docid": "864d97df4021751abe0aa60964690f9b", "text": "Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties of DNNs, especially under different attacker capabilities, is becoming crucial. Most existing security testing techniques for DNNs try to find adversarial examples without providing any formal security guarantees about the non-existence of such adversarial examples.
Recently, several projects have used different types of Satisfiability Modulo Theory (SMT) solvers to formally check security properties of DNNs. However, all of these approaches are limited by the high overhead caused by the solver. In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers. Instead, we leverage interval arithmetic to compute rigorous bounds on the DNN outputs. Our approach, unlike existing solver-based approaches, is easily parallelizable. We further present symbolic interval analysis along with several other optimizations to minimize overestimations of output bounds. We design, implement, and evaluate our approach as part of ReluVal, a system for formally checking security properties of Relu-based DNNs. Our extensive empirical results show that ReluVal outperforms Reluplex, a state-of-the-art solver-based system, by 200 times on average. On a single 8-core machine without GPUs, within 4 hours, ReluVal is able to verify a security property that Reluplex deemed inconclusive due to timeout after running for more than 5 days. Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs.", "title": "" }, { "docid": "aba0d28e9f1a138e569aa2525781e84d", "text": "A compact coplanar waveguide (CPW) monopole antenna is presented, comprising a fractal radiating patch in which a folded T-shaped element (FTSE) is embedded. The impedance match of the antenna is determined by the number of fractal unit cells, and the FTSE provides the necessary band-notch functionality. The filtering property can be tuned finely by controlling the length of the FTSE. Inclusion of a pair of rectangular notches in the ground plane is shown to extend the antenna's impedance bandwidth for ultrawideband (UWB) performance. The antenna's parameters were investigated to fully understand their effect on the antenna.
Salient parameters obtained from this analysis enabled the optimization of the antenna's overall characteristics. Experimental and simulation results demonstrate that the antenna exhibits the desired VSWR level and radiation patterns across the entire UWB frequency range. The measured results showed the antenna operates over a frequency band between 2.94–11.17 GHz with a fractional bandwidth of 117% for VSWR ≤ 2, except at the notch band between 3.3–4.2 GHz. The antenna has dimensions of 14 × 18 × 1 mm³.", "title": "" }, { "docid": "096912a3104d4c46eb22c647de40a471", "text": "An I/Q active mixer in LTCC technology using packaged HEMTs as mixing devices is described. A mixer is designed for use in the 24 GHz automotive radar application. An on-tile buffer amplifier was added to compensate for the limited power available from the system oscillator. Careful choice of the type or topology for each of the passive circuits implemented resulted in an optimal mixer layout, so a very small size for a ceramic tile of 15 × 15 × 0.8 mm³ was achieved. The measured conversion gain of the mixer for a 0 dBm LO level was -6.7 dB for I and -5.2 dB for Q. The amplitude imbalance between I and Q signals resulting from the aggressive miniaturization of the quadrature coupler could be compensated in the DSP stages of the system at no additional cost. The measured I-Q phase imbalance was around 3 degrees.
The measured return losses at mixer ports and LO-RF isolations are also very good.", "title": "" }, { "docid": "62c515d4b96f123b585a92a5aa919792", "text": "OBJECTIVE\nTo investigate the characteristics of the laryngeal mucosal microvascular network in suspected laryngeal cancer patients, using narrow band imaging, and to evaluate the value of narrow band imaging endoscopy in the early diagnosis of laryngeal precancerous and cancerous lesions.\n\n\nPATIENTS AND METHODS\nEighty-five consecutive patients with suspected precancerous or cancerous laryngeal lesions were enrolled in the study. Endoscopic narrow band imaging findings were classified into five types (I to V) according to the features of the mucosal intraepithelial papillary capillary loops assessed.\n\n\nRESULTS\nA total of 104 lesions (45 malignancies and 59 nonmalignancies) was detected under white light and narrow band imaging modes. The sensitivity and specificity of narrow band imaging in detecting malignant lesions were 88.9 and 93.2 per cent, respectively. The intraepithelial papillary capillary loop classification, as determined by narrow band imaging, was closely associated with the laryngeal lesions' histological findings. Type I to IV lesions were considered nonmalignant and type V lesions malignant. For type Va lesions, the sensitivity and specificity of narrow band imaging in detecting severe dysplasia or carcinoma in situ were 100 and 79.5 per cent, respectively. In patients with type Vb and Vc lesions, the sensitivity and specificity of narrow band imaging in detecting invasive carcinoma were 83.8 and 100 per cent, respectively.\n\n\nCONCLUSION\nNarrow band imaging is a promising approach enabling in vivo differentiation of nonmalignant from malignant laryngeal lesions by evaluating the morphology of mucosal capillaries. 
These results suggest endoscopic narrow band imaging may be useful in the early detection of laryngeal cancer and precancerous lesions.", "title": "" }, { "docid": "b5475fb64673f6be82e430d307b31fa2", "text": "We report a novel technique: a 1-stage transfer of 2 paddles of thoracodorsal artery perforator (TAP) flap with 1 pair of vascular anastomoses for simultaneous restoration of bilateral facial atrophy. A 47-year-old woman with a severe bilateral lipodystrophy of the face (Barraquer-Simons syndrome) was surgically treated using this procedure. Sufficient blood supply to each of the 2 flaps was confirmed with fluorescent angiography using the red-excited indocyanine green method. A good appearance was obtained, and the patient was satisfied with the result. Our procedure has advantages over conventional methods in that bilateral facial atrophy can be augmented simultaneously with only 1 donor site. Furthermore, our procedure requires only 1 pair of vascular anastomoses and the horizontal branch of the thoracodorsal nerve can be spared. To our knowledge, this procedure has not been reported to date. We consider that 2 paddles of TAP flap can be safely elevated if the distal flap is designed on the descending branch, and this technique is useful for the reconstruction of bilateral facial atrophy or deformity.", "title": "" }, { "docid": "89c85642fc2e0b1f10c9a13b19f1d833", "text": "Many current successful Person Re-Identification (ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced.
Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CUHK03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or reranking. For example, it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.", "title": "" }, { "docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3", "text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of the US prestige press (meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal), this paper focuses on the norm of balanced reporting, and shows that the prestige press's adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action.", "title": "" }, { "docid": "07305bc3eab0d83772ea1ab8ceebed9d", "text": "This paper examines the effect of the freemium strategy on Google Play, an online marketplace for Android mobile apps. By analyzing a large panel dataset consisting of 1,597 ranked mobile apps, we found that the freemium strategy is positively associated with increased sales volume and revenue of the paid apps. Higher sales rank and review rating of the free version of a mobile app both lead to higher sales rank of its paid version.
However, only higher review rating of the free app contributes to higher revenue from the paid version, suggesting that although offering a free version is a viable way to improve the visibility of a mobile app, revenue is largely determined by product quality, not product visibility. Moreover, we found that the impact of review rating is not significant when the free version is offered, or when the mobile app is a hedonic app.", "title": "" } ]
scidocsrr
e7241ba0cad3c91c71fc17acad382b92
Rotor Integrity Design for a High-Speed Modular Air-Cored Axial-Flux Permanent-Magnet Generator
[ { "docid": "4d44572846a0989bf4bc230b669c88b7", "text": "Application-specific integrated circuit (ASIC) ML4425 is often used for sensorless control of permanent-magnet (PM) brushless direct current (BLDC) motor drives. It integrates the terminal voltage of the unenergized winding that contains the back electromotive force (EMF) information and uses a phase-locked loop (PLL) to determine the proper commutation sequence for the BLDC motor. However, even without pulsewidth modulation, the terminal voltage is distorted by voltage pulses due to the freewheel diode conduction. The pulses, which appear very wide in an ultrahigh-speed (120 kr/min) drive, are also integrated by the ASIC. Consequently, the motor commutation is significantly retarded, and the drive performance is deteriorated. In this paper, it is proposed that the ASIC should integrate the third harmonic back EMF instead of the terminal voltage, such that the commutation retarding is largely reduced and the motor performance is improved. Basic principle and implementation of the new ASIC-based sensorless controller will be presented, and experimental results will be given to verify the control strategy. On the other hand, phase delay in the motor currents arises due to the influence of winding inductance, reducing the drive performance. Therefore, a novel circuit with discrete components is proposed. It also uses the integration of third harmonic back EMF and the PLL technique and provides controllable advanced commutation to the BLDC motor.", "title": "" } ]
[ { "docid": "6cca53a0b41a981bb6a1707c55e924da", "text": "During sustained high-intensity military training or simulated combat exercises, significant decreases in physical performance measures are often seen. The use of dietary supplements is becoming increasingly popular among military personnel, with more than half of the US soldiers deployed or garrisoned reported to using dietary supplements. β-Alanine is a popular supplement used primarily by strength and power athletes to enhance performance, as well as training aimed at improving muscle growth, strength and power. However, there is limited research examining the efficacy of β-alanine in soldiers conducting operationally relevant tasks. The gains brought about by β-alanine use by selected competitive athletes appears to be relevant also for certain physiological demands common to military personnel during part of their training program. Medical and health personnel within the military are expected to extrapolate and implement relevant knowledge and doctrine from research performed on other population groups. The evidence supporting the use of β-alanine in competitive and recreational athletic populations suggests that similar benefits would also be observed among tactical athletes. However, recent studies in military personnel have provided direct evidence supporting the use of β-alanine supplementation for enhancing combat-specific performance. This appears to be most relevant for high-intensity activities lasting 60–300 s. Further, limited evidence has recently been presented suggesting that β-alanine supplementation may enhance cognitive function and promote resiliency during highly stressful situations.", "title": "" }, { "docid": "c65f050e911abb4b58b4e4f9b9aec63b", "text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). 
In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.", "title": "" }, { "docid": "710e81da55d50271b55ac9a4f2d7f986", "text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. 
We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to explore the relationships between these dispositional factors and online dating behaviors.", "title": "" }, { "docid": "f35cc20c079df040de008ce1ca7ece83", "text": "The lack of training data is a common challenge in many machine learning problems, which is often tackled by semi-supervised learning methods or transfer learning methods. The former requires unlabeled images from the same distribution as the labeled ones and the latter leverages labeled images from related homogeneous tasks. However, these restrictions often cannot be satisfied. To address this, we propose a novel robust and discriminative self-taught learning approach to utilize any unlabeled data without the above restrictions. Our new approach employs a robust loss function to learn the dictionary, and enforces the structured sparse regularization to automatically select the optimal dictionary basis vectors and incorporate the supervision information contained in the labeled data. We derive an efficient iterative algorithm to solve the optimization problem and rigorously prove its convergence. Promising results in extensive experiments have validated the proposed approach.", "title": "" }, { "docid": "07db8fea11297fea2def9440a7d614dc", "text": "We present the 2017 Visual Domain Adaptation (VisDA) dataset and challenge, a large-scale testbed for unsupervised domain adaptation across visual domains. Unsupervised domain adaptation aims to solve the real-world problem of domain shift, where machine learning models trained on one domain must be transferred and adapted to a novel visual domain without additional supervision.
The VisDA2017 challenge is focused on the simulation-to-reality shift and has two associated tasks: image classification and image segmentation. The goal in both tracks is to first train a model on simulated, synthetic data in the source domain and then adapt it to perform well on real image data in the unlabeled test domain. Our dataset is the largest one to date for cross-domain object classification, with over 280K images across 12 categories in the combined training, validation and testing domains. The image segmentation dataset is also large-scale with over 30K images across 18 categories in the three domains. We compare VisDA to existing cross-domain adaptation datasets and provide a baseline performance analysis, as well as results of the challenge.", "title": "" }, { "docid": "88eaf07c8ef59bad1ea9f29f83050149", "text": "A monocular 3D object tracking system generally has only up-to-scale pose estimation results without any prior knowledge of the tracked object. In this paper, we propose a novel idea to recover the metric scale of an arbitrary dynamic object by optimizing the trajectory of the objects in the world frame, without motion assumptions. By introducing an additional constraint in the time domain, our monocular visual-inertial tracking system can obtain continuous six degree of freedom (6-DoF) pose estimation without scale ambiguity. Our method requires neither fixed multi-camera nor depth sensor settings for scale observability, instead, the IMU inside the monocular sensing suite provides scale information for both camera itself and the tracked object. We build the proposed system on top of our monocular visual-inertial system (VINS) to obtain accurate state estimation of the monocular camera in the world frame. The whole system consists of a 2D object tracker, an object region-based visual bundle adjustment (BA), VINS and a correlation analysis-based metric scale estimator. 
Experimental comparisons with ground truth demonstrate the accuracy of our 3D tracking performance, while a mobile augmented reality (AR) demo shows the feasibility of potential applications.", "title": "" }, { "docid": "368c874a35428310bb0d497045b411f9", "text": "Triboelectric nanogenerator (TENG) technology has emerged as a new mechanical energy harvesting technology with numerous advantages. This paper analyzes its charging behavior together with a load capacitor. Through numerical and analytical modeling, the charging performance of a TENG with a bridge rectifier under periodic external mechanical motion is completely analogous to that of a dc voltage source in series with an internal resistance. An optimum load capacitance that matches the TENG's impedance is observed for the maximum stored energy. This optimum load capacitance is theoretically detected to be linearly proportional to the charging cycle numbers and the inherent TENG capacitance. Experiments were also performed to further validate our theoretical anticipation and show the potential application of this work in guiding real experimental designs.", "title": "" }, { "docid": "5c2b7f85bba45905c324f7d6a10e5e53", "text": "We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Our contributions are: \n Mixture models with separated means: We study mixtures of poly(k)-many k-dimensional distributions where the means of every pair of distributions are separated by at least k^ε. In the special case of spherical Gaussian mixtures, we give a k^{O(1/ε)}-time algorithm that learns the means assuming separation at least k^ε, for any ε > 0.
This is the first algorithm to improve on greedy (“single-linkage”) and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation k^{1/4}. \n Robust estimation: When an unknown (1−ε)-fraction of X_1,…,X_n are chosen from a sub-Gaussian distribution with mean µ but the remaining points are chosen adversarially, we give an algorithm recovering µ to error ε^{1−1/t} in time k^{O(t)}, so long as sub-Gaussian-ness up to O(t) moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than ε^{1/2}. As a corollary, we achieve similar results for robust covariance estimation. \n Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given X_1,…,X_n ∈ ℝ^k for large k and n = poly(k) with the promise that a subset of X_1,…,X_n were sampled from a probability distribution with bounded moments, recover some information about that distribution.", "title": "" }, { "docid": "fc6d0ac5bea7182d25adc11ce7dcb489", "text": "The evolving 5G standards promise green communications with enhanced data services and significant link reliability. Massive multiple-input multiple-output (MIMO) techniques are the driving force behind green communications, since they provide better energy efficiency with reduced transmit power. The massive data generated from such mobile communication systems is a rich data source of great value. By procuring useful analytics from this precious resource, a big data aware 5G mobile communication system can be developed. A particular choice of big data analytics brings in the concept of large random matrix models and the single ring law.
Procuring useful analytics from this precious resource, a big data aware 5G mobile communication system can be developed. A particular choice of big analytics brings in the concept of large random matrix models and single ring law. In this paper, first, big data analytics is performed in the context of a mobile user communicating to, either a massive MIMO or a massive MIMO orthogonal frequency division multiplexing (OFDM) system. Constructive insights such as transmitted (source) signal correlation analysis (attributed to certain network events), channel correlation analysis (attributed to user mobility) have been extracted. Ring law also has its roots in signal detection, which suggests that few other signal detection algorithms may be suitable candidates for signal/channel correlation analysis. Therefore, second, a proposed extension of an information theoretic criterion (ITC) based signal detection algorithm, for correlation analysis, is compared with ring law. Using massive MIMO and MIMO-OFDM system simulations, the said correlation analyses have confirmed the prevalence of ring law. Third, it is deduced that integrating big data analytics with massive MIMO system improves spectral efficiency.", "title": "" }, { "docid": "c67b6ea4909f47f814760e7ccd38426f", "text": "Firewalls are core elements in network security. However, managing firewall rules, especially for enterprise networks, has become complex and error-prone. Firewall filtering rules have to be carefully written and organized in order to correctly implement the security policy. In addition, inserting or modifying a filtering rule requires thorough analysis of the relationship between this rule and other rules in order to determine the proper order of this rule and commit the updates. 
In this paper we present a set of techniques and algorithms that provide automatic discovery of firewall policy anomalies to reveal rule conflicts and potential problems in legacy firewalls, and anomaly-free policy editing for rule insertion, removal, and modification. This is implemented in a user-friendly tool called “Firewall Policy Advisor.” The Firewall Policy Advisor significantly simplifies the management of any generic firewall policy written as filtering rules, while minimizing network vulnerability due to firewall rule misconfiguration.", "title": "" }, { "docid": "bd8cdb4b89f2a0e4c91798da71621c75", "text": "Anthocyanins are one of the most widespread families of natural pigments in the plant kingdom. Their health beneficial effects have been documented in many in vivo and in vitro studies. This review summarizes the most recent literature regarding the health benefits of anthocyanins and their molecular mechanisms. It appears that several signaling pathways, including mitogen-activated protein kinase, nuclear factor κB, AMP-activated protein kinase, and Wnt/β-catenin, as well as some crucial cellular processes, such as cell cycle, apoptosis, autophagy, and biochemical metabolism, are involved in these beneficial effects and may provide potential therapeutic targets and strategies for the improvement of a wide range of diseases in the future. In addition, specific anthocyanin metabolites contributing to the observed in vivo biological activities, structure-activity relationships as well as additive and synergistic efficacy of anthocyanins are also discussed.", "title": "" }, { "docid": "043b51b50f17840508b0dfb92c895fc9", "text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property.
This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access codes, and biometric methods such as fingerprints, thumbprints, the iris, and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door, where valid smart card authentication guarantees entry. The model consists of a hardware module and software which provide the functionality to allow the door to be controlled through the authentication of a smart card by the microcontroller unit.", "title": "" }, { "docid": "1851533953769821423580614feae837", "text": "This work presents a 54 Gb/s monolithically integrated silicon photonics receiver (Rx). A germanium photodiode (Ge-PD) is monolithically integrated with a transimpedance amplifier (TIA) and a low frequency feedback loop to compensate for the DC input overload current. Bandwidth enhancement techniques are used to extend the bandwidth compared to previously published monolithically integrated receivers. Implemented in a 0.25 μm SiGe:C BiCMOS electronic/photonic integrated circuit (EPIC) technology, the Rx operates at λ = 1.55 μm, achieves an optical/electrical (O/E) bandwidth of 47 GHz with only ±5 ps group delay variation and a sensitivity of 0.2 dBm for 4.5×10⁻¹¹ BER at 40 Gb/s and 0.97 dBm for 1.05×10⁻⁶ BER at 54 Gb/s. It dissipates 73 mW of power, while occupying 1.6 mm² of area.
To the best of the author's knowledge, this work presents the state-of-the-art bandwidth and bit rate in monolithically integrated photonic receivers.", "title": "" }, { "docid": "e8d4a806f1515d9cbbe2b7924dfba92e", "text": "How to use, and influence, consumer social communications to improve business performance, reputation, and profit.", "title": "" }, { "docid": "198311a68ad3b9ee8020b91d0b029a3c", "text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. 
Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.", "title": "" }, { "docid": "3f292307824ed0b4d7fd59824ff9dd2b", "text": "The aim of this qualitative study was to obtain a better understanding of the developmental trajectories of persistence and desistence of childhood gender dysphoria and the psychosexual outcome of gender dysphoric children. Twenty five adolescents (M age 15.88, range 14-18), diagnosed with a Gender Identity Disorder (DSM-IV or DSM-IV-TR) in childhood, participated in this study. Data were collected by means of biographical interviews. Adolescents with persisting gender dysphoria (persisters) and those in whom the gender dysphoria remitted (desisters) indicated that they considered the period between 10 and 13 years of age to be crucial. They reported that in this period they became increasingly aware of the persistence or desistence of their childhood gender dysphoria. Both persisters and desisters stated that the changes in their social environment, the anticipated and actual feminization or masculinization of their bodies, and the first experiences of falling in love and sexual attraction had influenced their gender related interests and behaviour, feelings of gender discomfort and gender identification. Although, both persisters and desisters reported a desire to be the other gender during childhood years, the underlying motives of their desire seemed to be different.", "title": "" }, { "docid": "6eda76a015e8cb9122ed89b491474248", "text": "Beauty treatment for skin requires a high-intensity focused ultrasound (HIFU) transducer to generate coagulative necrosis in a small focal volume (e.g., 1 mm³) placed at a shallow depth (3-4.5 mm from the skin surface). 
For this, it is desirable to make the F-number as small as possible under the largest possible aperture in order to generate ultrasound energy high enough to induce tissue coagulation in such a small focal volume. However, satisfying both conditions at the same time is demanding. To meet the requirements, this paper, therefore, proposes a double-focusing technique, in which the aperture of an ultrasound transducer is spherically shaped for initial focusing and an acoustic lens is used to finally focus ultrasound on a target depth of treatment; it is possible to achieve the F-number of unity or less while keeping the aperture of a transducer as large as possible. In accordance with the proposed method, we designed and fabricated a 7-MHz double-focused ultrasound transducer. The experimental results demonstrated that the fabricated double-focused transducer had a focal length of 10.2 mm reduced from an initial focal length of 15.2 mm and, thus, the F-number changed from 1.52 to 1.02. Based on the results, we concluded that the proposed double-focusing method is suitable to decrease F-number while maintaining a large aperture size.", "title": "" }, { "docid": "940b907c28adeaddc2515f304b1d885e", "text": "In this study, we intend to identify the evolutionary footprints of the South Iberian population focusing on the Berber and Arab influence, which has received little attention in the literature. Analysis of the Y-chromosome variation represents a convenient way to assess the genetic contribution of North African populations to the present-day South Iberian genetic pool and could help to reconstruct other demographic events that could have influenced on that region. A total of 26 Y-SNPs and 17 Y-STRs were genotyped in 144 samples from 26 different districts of South Iberia in order to assess the male genetic composition and the level of substructure of male lineages in this area. 
To obtain a more comprehensive picture of the genetic structure of the South Iberian region as a whole, our data were compared with published data on neighboring populations. Our analyses allow us to confirm the specific impact of the Arab and Berber expansion and dominion of the Peninsula. Nevertheless, our results suggest that this influence is not bigger in Andalusia than in other Iberian populations.", "title": "" }, { "docid": "a762e16e19e71736331168f0910f2379", "text": "Introduction. In the current era of digital communication, users share information they consider important, using wikis, blogs and social networking Websites. The digital content includes valuable as well as biased, false and demagogic information. The objectives of this review paper are, i) To understand the perceptions of users regarding Web credibility judgment and the problems faced by them, ii) To review and list the factors used in various Web credibility judgment techniques, iii) To suggest a hybrid model that takes advantage of different credibility judgment techniques. Method. This paper adopted a systematic review methodology based on the guidelines of Kitchenham. Analysis. Over 100 papers were reviewed to compile the list of factors covered in the approaches. These analyses were summarized in the form of tables featuring the methods, types and categories of approaches as well as the factors covered. Results. Our findings show that by adopting more than one approach when assessing the credibility of a Web, measuring credibility assessment becomes easier. Therefore, a hybrid approach is presented for the conduct of credibility assessment using the different approaches available to measure accuracy, authority, aesthetics, professionalism, popularity, currency, impartiality and quality. Conclusions. 
This paper hopes to contribute to the body of knowledge related to the identification of the factors (and categories they belong to) affecting Web credibility judgment and highlights the importance of a hybrid model for making accurate and effective Web credibility judgments.", "title": "" } ]
scidocsrr
d00525ddea4edcf5ffb798e19962dc24
Designing products with added emotional value; development and application of an approach for research through design
[ { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" } ]
[ { "docid": "c3b2949d4d851df37103d61b8b51c60e", "text": "Training deep neural networks is difficult for the pathological curvature problem. Re-parameterization is an effective way to relieve the problem by learning the curvature approximately or constraining the solutions of weights with good properties for optimization. This paper proposes to reparameterize the input weight of each neuron in deep neural networks by normalizing it with zero-mean and unit-norm, followed by a learnable scalar parameter to adjust the norm of the weight. This technique effectively stabilizes the distribution implicitly. Besides, it improves the conditioning of the optimization problem and thus accelerates the training of deep neural networks. It can be wrapped as a linear module in practice and plugged in any architecture to replace the standard linear module. We highlight the benefits of our method on both multi-layer perceptrons and convolutional neural networks, and demonstrate its scalability and efficiency on SVHN, CIFAR-10, CIFAR-100 and ImageNet datasets.", "title": "" }, { "docid": "fc40a4af9411d0e9f494b13cbb916eac", "text": "P (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks has focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative network externalities—while potentially important—has received far less attention. Our research addresses this gap in understanding by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. 
Our research uses a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate, as the network increases in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded—at some point the costs that a marginal user imposes on the network will exceed the value they provide to the network. This finding is in contrast to early predictions that larger P2P networks would always provide more value to users than smaller networks. Finally, these results also highlight the importance of considering user incentives—an important determinant of resource sharing in P2P networks—in network design.", "title": "" }, { "docid": "05a76f64a6acbcf48b7ac36785009db3", "text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). 
The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.", "title": "" }, { "docid": "6ff681e22778abaf3b79f054fa5a1f30", "text": "Computer generated battleeeld agents need to be able to explain the rationales for their actions. Such explanations make it easier to validate agent behavior, and can enhance the eeectiveness of the agents as training devices. This paper describes an explanation capability called Debrief that enables agents implemented in Soar to describe and justify their decisions. Debrief determines the motivation for decisions by recalling the context in which decisions were made, and determining what factors were critical to those decisions. In the process Debrief learns to recognize similar situations where the same decision would be made for the same reasons. Debrief currently being used by the TacAir-Soar tactical air agent to explain its actions , and is being evaluated for incorporation into other reactive planning agents.", "title": "" }, { "docid": "50b5f29431b758e0df5bd6e295ef78d1", "text": "While deep convolutional neural networks (CNNs) have emerged as the driving force of a wide range of domains, their computationally and memory intensive natures hinder the further deployment in mobile and embedded applications. Recently, CNNs with low-precision parameters have attracted much research attention. Among them, multiplier-free binary- and ternary-weight CNNs are reported to be of comparable recognition accuracy with full-precision networks, and have been employed to improve the hardware efficiency. However, even with the weights constrained to binary and ternary values, large-scale CNNs still require billions of operations in a single forward propagation pass.\n In this paper, we introduce a novel approach to maximally eliminate redundancy in binary- and ternary-weight CNN inference, improving both the performance and energy efficiency. 
The initial kernels are transformed into much fewer and sparser ones, and the output feature maps are rebuilt from the immediate results. Overall, the number of total operations in convolution is reduced. To find an efficient transformation solution for each already trained network, we propose a searching algorithm, which iteratively matches and eliminates the overlap in a set of kernels. We design a specific hardware architecture to optimize the implementation of kernel transformation. Specialized dataflow and scheduling method are proposed. Tested on SVHN, AlexNet, and VGG-16, our architecture removes 43.4%--79.9% operations, and speeds up the inference by 1.48--3.01 times.", "title": "" }, { "docid": "f5eb1355dd1511bd647ec317d0336cd7", "text": "Cloud Computing holds the potential to eliminate the requirements for setting up of highcost computing infrastructure for the IT-based solutions and services that the industry uses. It promises to provide a flexible IT architecture, accessible through internet for lightweight portable devices. This would allow many-fold increase in the capacity or capabilities of the existing and new software. In a cloud computing environment, the entire data reside over a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centres may lie in any corner of the world beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and taken care of. Also, one can never deny the possibility of a server breakdown that has been witnessed, rather quite often in the recent times. There are various issues that need to be dealt with respect to security and privacy in a cloud computing scenario. 
This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening the Cloud computing adoption and diffusion affecting the various stake-holders linked to it.", "title": "" }, { "docid": "ea9fe846b389c04355a34572383a1d95", "text": "Keloids are common in the Asian population. Multiple or huge keloids can appear on the chest wall because of its tendency to develop acne, sebaceous cyst, etc. It is difficult to find an ideal treatment for keloids in this area due to the limit of local soft tissues and higher recurrence rate. This study aims at establishing an individualized protocol that could be easily applied according to the size and number of chest wall keloids.A total of 445 patients received various methods (4 protocols) of treatment in our department from September 2006 to September 2012 according to the size and number of their chest wall keloids. All of the patients received adjuvant radiotherapy in our hospital. Patient and Observer Scar Assessment Scale (POSAS) was used to assess the treatment effect by both doctors and patients. With mean follow-up time of 13 months (range: 6-18 months), 362 patients participated in the assessment of POSAS with doctors.Both the doctors and the patients themselves used POSAS to evaluate the treatment effect. The recurrence rate was 0.83%. There was an obvious significant difference (P < 0.001) between the before-surgery score and the after-surgery score from both doctors and patients, indicating that both doctors and patients were satisfied with the treatment effect.Our preliminary clinical result indicates that good clinical results could be achieved by choosing the proper method in this algorithm for Chinese patients with chest wall keloids. 
This algorithm could play a guiding role for surgeons when dealing with chest wall keloid treatment.", "title": "" }, { "docid": "efc6c423fa98c012543352db8fb0688a", "text": "Wireless sensor networks consist of sensor nodes with sensing and communication capabilities. We focus on data aggregation problems in energy constrained sensor networks. The main goal of data aggregation algorithms is to gather and aggregate data in an energy efficient manner so that network lifetime is enhanced. In this paper, we present a survey of data aggregation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency and data accuracy. We conclude with possible future research directions.", "title": "" }, { "docid": "2f649ca20a652ab96db6be136e2e90cc", "text": "iii TABLE OF CONTENTS iv", "title": "" }, { "docid": "894e945c9bb27f5464d1b8f119139afc", "text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. 
In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.", "title": "" }, { "docid": "099dbf8d4c0b401cd3389583eb4495f3", "text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. 
While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.", "title": "" }, { "docid": "36b97ad6508f40acfaba05318d65211a", "text": "Actinomycotic infections are known to have an association with difficulties in diagnosis and treatment. These infections usually involve the head, neck, thorax, and abdomen. Actinomycosis of the upper lip is a rare condition and an important one as well, because it can imitate other diseases. As the initial impression, it can easily be mistaken for a mucocele, venous lake, or benign neoplasm. An 82-year-old man presented with an asymptomatic normal skin colored nodule on the upper lip. Histopathologic findings showed an abscess and sulfur granules in the dermis. Gram staining results showed a mesh of branching rods. In this report, we present an unusual case of actinomycosis of the upper lip and discuss its characteristics and therapeutic modalities.", "title": "" }, { "docid": "02e6ff753b0050792eda885ce1378966", "text": "Bacteria possess numerous and diverse means of gene regulation using RNA molecules, including mRNA leaders that affect expression in cis, small RNAs that bind to proteins or base pair with target RNAs, and CRISPR RNAs that inhibit the uptake of foreign DNA. Although examples of RNA regulators have been known for decades in bacteria, we are only now coming to a full appreciation of their importance and prevalence. Here, we review the known mechanisms and roles of regulatory RNAs, highlight emerging themes, and discuss remaining questions.", "title": "" }, { "docid": "580bdf8197e94c5bc82bc52bcc7cf6c7", "text": "This article reports a theoretical and experimental attempt to relate and contrast 2 traditionally separate research programs: inattentional blindness and attention capture. 
Inattentional blindness refers to failures to notice unexpected objects and events when attention is otherwise engaged. Attention capture research has traditionally used implicit indices (e.g., response times) to investigate automatic shifts of attention. Because attention capture usually measures performance whereas inattentional blindness measures awareness, the 2 fields have existed side by side with no shared theoretical framework. Here, the authors propose a theoretical unification, adapting several important effects from the attention capture literature to the context of sustained inattentional blindness. Although some stimulus properties can influence noticing of unexpected objects, the most influential factor affecting noticing is a person's own attentional goals. The authors conclude that many--but not all--aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.", "title": "" }, { "docid": "2fc08ad59c39e9bbd79168dbf9ecff44", "text": "Machine learning models are susceptible to adversarial perturbations: small changes to input that can cause large changes in output. Additionally, there exist input-agnostic perturbations, called universal adversarial perturbations, which can change the inference of target model on most of the data samples. However, existing methods to craft universal perturbations are (i) task specific, (ii) require samples from the training data distribution, and (iii) perform complex optimizations. Additionally, fooling ability of the crafted perturbations is proportional to the available training data. In this paper, we present a novel, generalizable and data-free approach for crafting universal adversarial perturbations. Independent of the underlying task, our objective achieves fooling via corrupting the extracted features at multiple layers. 
Therefore, the proposed objective is generalizable to craft image-agnostic perturbations across multiple vision tasks such as object recognition, semantic segmentation, and depth estimation. In the practical setting of black-box attack scenario, we show that our objective outperforms the data dependent objectives. Further, via exploiting simple priors related to the data distribution, our objective remarkably boosts the fooling ability of the crafted perturbations. Significant fooling rates achieved by our objective emphasize that the current deep learning models are now at an increased risk, since our objective generalizes across multiple tasks without the requirement of training data.", "title": "" }, { "docid": "0ee3a55a5d4385005fb9d54dde843e6e", "text": "This paper provides overviews of interesting topics of game theory, information economics, rational expectations, and efficient market hypothesis. Then, the paper shows how these topics are interconnected, with the rational expectations topic playing the pivotal role. Finally, by way of proving a theorem in the context of the well-known Kyle's [75] rational expectations equilibrium model, the paper provides an exposition of the interconnectedness of the topics.", "title": "" }, { "docid": "3ea1b050c06e723be5234d98ea577edd", "text": "Profiling gene expression in brain structures at various spatial and temporal scales is essential to understanding how genes regulate the development of brain structures. The Allen Developing Mouse Brain Atlas provides high-resolution 3-D in situ hybridization (ISH) gene expression patterns in multiple developing stages of the mouse brain. Currently, the ISH images are annotated with anatomical terms manually. In this paper, we propose a computational approach to annotate gene expression pattern images in the mouse brain at various structural levels over the course of development. 
We applied a deep convolutional neural network that was trained on a large set of natural images to extract features from the ISH images of the developing mouse brain. As a baseline representation, we applied invariant image feature descriptors to capture local statistics from ISH images and used the bag-of-words approach to build image-level representations. Both types of features from multiple ISH image sections of the entire brain were then combined to build 3-D, brain-wide gene expression representations. We employed regularized learning methods for discriminating gene expression patterns in different brain structures. Results show that our approach of using convolutional models as feature extractors achieved superior performance in annotating gene expression patterns at multiple levels of brain structures throughout four developing ages. Overall, we achieved an average AUC of 0.894 ± 0.014, as compared with 0.820 ± 0.046 yielded by the bag-of-words approach. A deep convolutional neural network model trained on natural image sets and applied to gene expression pattern annotation tasks yielded superior performance, demonstrating that its transfer learning property is applicable to such biological image sets.", "title": "" }, { "docid": "e6e86f903da872b89b1043c4df9a41d6", "text": "With the emergence of Web 2.0 technology and the expansion of on-line social networks, current Internet users have the ability to add their reviews, ratings and opinions on social media and on commercial and news web sites. Sentiment analysis aims to classify these reviews in an automatic way. In the literature, there are numerous approaches proposed for automatic sentiment analysis for different language contexts. Each language has its own properties that make sentiment analysis more challenging. In this regard, this work presents a comprehensive survey of existing Arabic sentiment analysis studies, and covers the various approaches and techniques proposed in the literature. 
Moreover, we highlight the main difficulties and challenges of Arabic sentiment analysis, and the proposed techniques in literature to overcome these barriers.", "title": "" }, { "docid": "c9398b3dad75ba85becbec379a65a219", "text": "Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantage is the ease of use and the price of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of one of universities and their resilience against brute force and dictionary attacks. The passwords were obtained from a university's computing center in plaintext format for a very long period - first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students, despite positive trends, still choose easy to break passwords. This work contributes to loud warnings that a shift from traditional password schemes to more elaborate systems is needed.", "title": "" }, { "docid": "70cad4982e42d44eec890faf6ddc5c75", "text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. 
Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.", "title": "" } ]
scidocsrr
d2fd4f5772946f23135d762390315b83
User privacy and data trustworthiness in mobile crowd sensing
[ { "docid": "bd19395492dfbecd58f5cfd56b0d00a7", "text": "The ubiquity of the various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit the individuals and communities greatly, the collection and analysis of the participators' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participators' location privacy, and few are done on participators' trajectory privacy. The effective analysis on trajectories that contain spatial-temporal history information will reveal participators' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model with considering the time factor from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participators' trajectories privacy effectively with lower information loss and costs than what is afforded by the other proposals.", "title": "" } ]
[ { "docid": "093deb80586f3bb3295354d3878d32cd", "text": "Augmented feedback (AF) can play an important role when learning or improving a motor skill. As research dealing with AF is broad and diverse, the purpose of this review is to provide the reader with an overview of the use of AF in exercise, motor learning and injury prevention research with respect to how it can be presented, its informational content and the limitations. The term 'augmented' feedback is used because additional information provided by an external source is added to the task-intrinsic feedback that originates from a person's sensory system. In recent decades, numerous studies from various fields within sport science (exercise science, sports medicine, motor control and learning, psychology etc.) have investigated the potential influence of AF on performance improvements. The first part of the review gives a theoretical background on feedback in general but particularly AF. The second part tries to highlight the differences between feedback that is given as knowledge of result and knowledge of performance. The third part introduces studies which have applied AF in exercise and prevention settings. Finally, the limitations of feedback research and the possible reasons for the diverging findings are discussed. The focus of this review lies mainly on the positive influence of AF on motor performance. Underlying neuronal adaptations and theoretical assumptions from learning theories are addressed briefly.", "title": "" }, { "docid": "643be78202e4d118e745149ed389b5ef", "text": "Little clinical research exists on the contribution of the intrinsic foot muscles (IFM) to gait or on the specific clinical evaluation or retraining of these muscles. The purpose of this clinical paper is to review the potential functions of the IFM and their role in maintaining and dynamically controlling the medial longitudinal arch. 
Clinically applicable methods of evaluation and retraining of these muscles for the effective management of various foot and ankle pain syndromes are discussed.", "title": "" }, { "docid": "d9c6898c239487fd57b5b8aea949de5d", "text": "In distributed reflective denial-of-service (DRDoS) attacks, adversaries send requests to public servers (e.g., open recursive DNS resolvers) and spoof the IP address of a victim. These servers, in turn, flood the victim with valid responses and – unknowingly – exhaust its bandwidth. Recently, attackers launched DRDoS attacks with hundreds of Gb/s bandwidth of this kind. While the attack technique is well-known for a few protocols such as DNS, it is unclear if further protocols are vulnerable to similar or worse attacks. In this paper, we revisit popular UDP-based protocols of network services, online games, P2P filesharing networks and P2P botnets to assess their security against DRDoS abuse. We find that 14 protocols are susceptible to bandwidth amplification and multiply the traffic up to a factor of 4670. In the worst case, attackers thus need only 0.02% of the bandwidth that they want their victim(s) to receive, enabling far more dangerous attacks than what is known today. Worse, we identify millions of public hosts that can be abused as amplifiers. We then analyze more than 130 real-world DRDoS attacks. For this, we announce bait services to monitor their abuse and analyze darknet as well as network traffic from large ISPs. We use traffic analysis to detect both victims and amplifiers, showing that attackers already started to abuse vulnerable protocols other than DNS. Lastly, we evaluate countermeasures against DRDoS attacks, such as preventing spoofing or hardening protocols and service configurations. We show that carefully-crafted DRDoS attacks may evade poorly-designed rate limiting solutions. 
In addition, we show that some attacks evade packet-based filtering techniques, such as port-, content- or length-based filters.", "title": "" }, { "docid": "6a252976282ba1d0d354d8a86d0c49f1", "text": "Ethics of brain emulations Whole brain emulation attempts to achieve software intelligence by copying the function of biological nervous systems into software. This paper aims at giving an overview of the ethical issues of the brain emulation approach, and to analyse how they should affect responsible policy for developing the field. Animal emulations have uncertain moral status, and a principle of analogy is proposed for judging treatment of virtual animals. Various considerations of developing and using human brain emulations are discussed. Introduction Whole brain emulation (WBE) is an approach to achieve software intelligence by copying the functional structure of biological nervous systems into software. Rather than attempting to understand the high-level processes underlying perception, action, emotions and intelligence, the approach assumes that they would emerge from a sufficiently close imitation of the low-level neural functions, even if this is done through a software process. While the feasibility of brain emulations has been discussed (Sandberg 2013), little analysis of the ethics of the project so far has been done. The main questions of this paper are to what extent brain emulations are moral patients, and what new ethical concerns are introduced as a result of brain emulation technology. The basic idea is to take a particular brain, scan its structure in detail at some resolution, construct a software model of the physiology that is so faithful to the original that, when run on appropriate hardware, it will have an internal causal structure that is essentially the same as the original brain. All relevant functions on some level of description are present, and higher level functions supervene from these. 
While at present an unfeasibly ambitious challenge, the necessary computing power and various scanning methods are rapidly developing. Large scale computational brain models are a very active research area, at present reaching the size of mammalian nervous systems. WBE can be viewed as the logical endpoint of current trends in computational neuroscience and systems biology. Obviously the eventual feasibility depends on a number of philosophical issues (physicalism, functionalism, non-organicism) and empirical facts (computability, scale separation, detectability, scanning and simulation tractability) that cannot be predicted beforehand; WBE can be viewed as a program trying to test them empirically. (Sandberg 2013) Early projects are likely to merge data from multiple brains and studies, attempting to show that this can produce a sufficiently rich model to produce nontrivial behaviour but not attempting to emulate any particular individual. However, …", "title": "" }, { "docid": "784b654ce28567d0055a4552959ad7fa", "text": "Understanding the privacy implication of adopting a certain privacy setting is a complex task for the users of social network systems. Users need tool support to articulate potential access scenarios and perform policy analysis. Such a need is particularly acute for Facebook-style Social Network Systems (FSNSs), in which semantically rich topology-based policies are used for access control. In this work, we develop a prototypical tool for Reflective Policy Assessment (RPA) --- a process in which a user examines her profile from the viewpoint of another user in her extended neighbourhood in the social graph. We verify the utility and usability of our tool in a within-subject user study.", "title": "" }, { "docid": "88c287378ce5a2ae0871b9ff32e93d37", "text": "Design-oriented research is an act of collective imagining—a way in which we work together to bring about a future that lies slightly out of our grasp. 
In this paper, we examine the collective imagining of ubiquitous computing by bringing it into alignment with a related phenomenon, science fiction, in particular as imagined by a series of television shows that form part of the cultural backdrop for many members of the research community. A comparative reading of these fictional narratives highlights a series of themes that are also implicit in the research literature. We argue both that these themes are important considerations in the shaping of technological design and that an attention to the tropes of popular culture holds methodological value for ubiquitous computing.", "title": "" }, { "docid": "f27c527dce75f1006ceff2b77d4e76b8", "text": "Geckos are exceptional in their ability to climb rapidly up smooth vertical surfaces. Microscopy has shown that a gecko's foot has nearly five hundred thousand keratinous hairs or setae. Each 30–130 µm long seta is only one-tenth the diameter of a human hair and contains hundreds of projections terminating in 0.2–0.5 µm spatula-shaped structures. After nearly a century of anatomical description, here we report the first direct measurements of single setal force by using a two-dimensional micro-electro-mechanical systems force sensor and a wire as a force gauge. Measurements revealed that a seta is ten times more effective at adhesion than predicted from maximal estimates on whole animals. Adhesive force values support the hypothesis that individual seta operate by van der Waals forces. The gecko's peculiar behaviour of toe uncurling and peeling led us to discover two aspects of setal function which increase their effectiveness. A unique macroscopic orientation and preloading of the seta increased attachment force 600-fold above that of frictional measurements of the material. 
Suitably orientated setae reduced the forces necessary to peel the toe by simply detaching above a critical angle with the substratum.", "title": "" }, { "docid": "bb77f2d4b85aaaee15284ddf7f16fb18", "text": "We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several smartphone localization systems in the recent past, including our own, make a simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution from past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up and down bounce, side-to-side sway, swing of arms or legs, etc. WalkCompass analyzes the human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone on the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/localization/walkcompass.", "title": "" }, { "docid": "43fc501b2bf0802b7c1cc8c4280dcd85", "text": "We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen–Loève (KL) expansion. 
A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/Np) ). Here m and Np are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m ≪ Np when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.", "title": "" }, { "docid": "2cd2a85598c0c10176a34c0bd768e533", "text": "BACKGROUND\nApart from skills and knowledge, self-efficacy is an important factor in the students' preparation for clinical work. The Physiotherapist Self-Efficacy (PSE) questionnaire was developed to measure physical therapy (PT) students' self-efficacy in the cardiorespiratory, musculoskeletal, and neurological clinical areas. The aim of this study was to establish the measurement properties of the Dutch PSE questionnaire, and to explore whether self-efficacy beliefs in students are clinical area specific.\n\n\nMETHODS\nMethodological quality of the PSE was studied using COSMIN guidelines. Item analysis, structural validity, and internal consistency of the PSE were determined in 207 students. Test-retest reliability was established in another sample of 60 students completing the PSE twice. 
Responsiveness of the scales was determined in 80 students completing the PSE at the start and the end of the second year. Hypothesis testing was used to determine construct validity of the PSE.\n\n\nRESULTS\nExploratory factor analysis resulted in three meaningful components explaining similar proportions of variance (25%, 21%, and 20%), reflecting the three clinical areas. Internal consistency of each of the three subscales was excellent (Cronbach's alpha > .90). Intra Class Correlation Coefficient was good (.80). Hypothesis testing confirmed construct validity of the PSE.\n\n\nCONCLUSION\nThe PSE shows excellent measurement properties. The component structure of the PSE suggests that self-efficacy about physiotherapy in PT students is not generic, but specific for a clinical area. As self-efficacy is considered a predictor of performance in clinical settings, enhancing self-efficacy is an explicit goal of educational interventions. Further research is needed to determine if the scale is specific enough to assess the effect of educational interventions on student self-efficacy.", "title": "" }, { "docid": "ccd356a943f19024478c42b5db191293", "text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this conflict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the first-person actor. 
The first-person actor does not involve a repetitive gestalt mode of gameplay, but defines gameplay in terms of character development and dramatic interaction.", "title": "" }, { "docid": "3fe30cef3e308c2bbbb8c65197394bfe", "text": "The success of any Intrusion Detection System (IDS) is a complicated problem due to its nonlinearity and the quantitative or qualitative network traffic data stream with irrelevant and redundant features. How to choose the effective and key features for IDS is a very important topic in information security. Support vector machine (SVM) has been employed to provide potential solutions for the IDS problem. However, the practicability of SVM is affected due to the difficulty of selecting appropriate SVM parameters. Particle swarm optimization (PSO) is an optimization method which not only has strong global search capability, but also is very easy to implement. Thus, the proposed PSO–SVM model is applied to an intrusion detection problem, the KDD Cup 99 data set. The standard PSO is used to determine the free parameters of the support vector machine and the binary PSO is used to obtain the optimum feature subset when building the intrusion detection system. The experimental results indicate that the PSO–SVM method can achieve a higher detection rate than regular SVM algorithms in the same time.", "title": "" }, { "docid": "05778f208ed7e290139d4660dedb372e", "text": "As battery-powered mobile devices become more popular and energy hungry, wireless power transfer technology, which allows power to be transferred from a charger to ambient devices wirelessly, has received intensive interest. Existing schemes mainly focus on the power transfer efficiency but overlook the health impairments caused by RF exposure. In this paper, we study the safe charging problem (SCP) of scheduling power chargers so that more energy can be received while no location in the field has electromagnetic radiation (EMR) exceeding a given threshold $R_{t}$. 
We show that SCP is NP-hard and propose a solution, which provably outperforms the optimal solution to SCP with a relaxed EMR threshold $(1-\\epsilon)R_{t}$. Testbed results based on 8 Powercast TX91501 chargers validate our results. Extensive simulation results show that the gap between our solution and the optimal one is only 6.7% when $\\epsilon = 0.1$, while a naive greedy algorithm is 34.6% below our solution.", "title": "" }, { "docid": "ec7f20169de673cc14b31e8516937df2", "text": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.", "title": "" }, { "docid": "84f0a7acf907b4a9a40199f7a8d0ae84", "text": "To support effective data exploration, there is a well-recognized need for solutions that can automatically recommend interesting visualizations, which reveal useful insights into the analyzed data. However, such visualizations come at the expense of high data processing costs, where a large number of views are generated to evaluate their usefulness. Those costs are further escalated in the presence of numerical dimensional attributes, due to the potentially large number of possible binning aggregations, which lead to a drastic increase in the number of possible visualizations. To address that challenge, in this paper we propose the MuVE scheme for Multi-Objective View Recommendation for Visual Data Exploration. MuVE introduces a hybrid multi-objective utility function, which captures the impact of binning on the utility of visualizations. 
Consequently, novel algorithms are proposed for the efficient recommendation of data visualizations that are based on numerical dimensions. The main idea underlying MuVE is to incrementally and progressively assess the different benefits provided by a visualization, which allows an early pruning of a large number of unnecessary operations. Our extensive experimental results show the significant gains provided by our proposed scheme.", "title": "" }, { "docid": "bfbab49beac603acd24b88414bac96d3", "text": "We consider the problem of automatically generating textual paraphrases with modified attributes or stylistic properties, focusing on the setting without parallel data (Hu et al., 2017; Shen et al., 2017). This setting poses challenges for learning and evaluation. We show that the metric of post-transfer classification accuracy is insufficient on its own, and propose additional metrics based on semantic content preservation and fluency. For reliable evaluation, all three metric categories must be taken into account. We contribute new loss functions and training strategies to address the new metrics. Semantic preservation is addressed by adding a cyclic consistency loss and a loss based on paraphrase pairs, while fluency is improved by integrating losses based on style-specific language models. Automatic and manual evaluation show large improvements over the baseline method of Shen et al. (2017). Our hope is that these losses and metrics can be general and useful tools for a range of textual transfer settings without parallel corpora.", "title": "" }, { "docid": "f178c362aac13afaf0229b83a8f5ace0", "text": "Around the world, Rotating Savings and Credit Associations (ROSCAs) are a prevalent saving mechanism in markets with low financial inclusion ratios. ROSCAs, which rely on social networks, facilitate credit and financing needs for individuals and small businesses. Despite their benefits, informality in ROSCAs leads to problems driven by disagreements and frauds. 
This further necessitates ROSCA participants’ dependency on social capital. To overcome these problems, to build on ROSCA participants’ financial proclivities, and to enhance access and efficiency of ROSCAs, we explore opportunities to digitize ROSCAs in Pakistan by building a digital platform for collection and distribution of ROSCA funds. Digital ROSCAs have the potential to mitigate issues with safety and privacy of ROSCA money, frauds and defaults in ROSCAs, and record keeping, including payment history. In this context, we illustrate features of a digital ROSCA and examine aspects of gender, social capital, literacy, and religion as they relate to digital ROSCAs.", "title": "" }, { "docid": "32670b62c6f6e7fa698e00f7cf359996", "text": "Four cases of self-poisoning with 'Roundup' herbicide are described, one of them fatal. One of the survivors had a protracted hospital stay and considerable clinical and laboratory detail is presented. Serious self-poisoning is associated with massive gastrointestinal fluid loss and renal failure. The management of such cases and the role of surfactant toxicity are discussed.", "title": "" }, { "docid": "8ed5032f5bf2e26c177577a28bdb7d3a", "text": "Wireless Sensor Network (WSN) is an important research area nowadays. Wireless Sensor Network is deployed in hostile environment consisting of hundreds to thousands of nodes. They can be deployed for various mission-critical applications, such as health care, military monitoring as well as civilian applications. There are various security issues in these networks. One of such issue is outlier detection. In outlier detection, data obtained by some of the nodes whose behavior is different from the data of other nodes are spotted in the group of data. But identification of such nodes is a little difficult. In this paper, machine learning based methods for outlier detection are discussed among which the Bayesian Network looks advantageous over other methods. 
Bayesian classification algorithm can be used for calculating the conditional dependency of the available nodes in WSN. This method can also calculate the missing data value.", "title": "" } ]
scidocsrr
1a9b973409d28883ae5d88ab3c585117
Situation entity types: automatic classification of clause-level aspect
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" } ]
[ { "docid": "c17bb7273413c35ab98d9a241bbcfdc8", "text": "Software-defined-network technologies like OpenFlow could change how datacenters, cloud systems, and perhaps even the Internet handle tomorrow's heavy network loads.", "title": "" }, { "docid": "b898a5e8d209cf8ed7d2b8bfae0e58e2", "text": "Large datasets often have unreliable labels—such as those obtained from Amazon's Mechanical Turk or social media platforms—and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique for accounting for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics. Then, we train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled) dataset. The augmented model is underdetermined, so in order to encourage the learning of a non-trivial noise model, we apply dropout regularization to the weights of the noise model during training. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.", "title": "" }, { "docid": "9ec7b122117acf691f3bee6105deeb81", "text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. 
The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D humanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.", "title": "" }, { "docid": "fa396377fbec310c9d4b9792cc66f9b9", "text": "Attention-based deep learning model as a human-centered smart technology has become the state-of-the-art method in addressing relation extraction, while implementing natural language processing. How to effectively improve the computational performance of that model has always been a research focus in both academic and industrial communities. Generally, the structures of model would greatly affect the final results of relation extraction. In this article, a deep learning model with a novel structure is proposed. In our model, after incorporating the highway network into a bidirectional gated recurrent unit, the attention mechanism is additionally utilized in an effort to assign weights of key issues in the network structure. Here, the introduction of highway network could enable the proposed model to capture much more semantic information. Experiments on a popular benchmark data set are conducted, and the results demonstrate that the proposed model outperforms some existing relation extraction methods. 
Furthermore, the performance of our method is also tested in the analysis of geological data, where the relation extraction in Chinese geological field is addressed and a satisfactory display result is achieved.", "title": "" }, { "docid": "1ce8e79e7fe4761858b3e83c49b80c80", "text": "Taking the concept of thin clients to the limit, this paper proposes that desktop machines should just be simple, stateless I/O devices (display, keyboard, mouse, etc.) that access a shared pool of computational resources over a dedicated interconnection fabric --- much in the same way as a building's telephone services are accessed by a collection of handset devices. The stateless desktop design provides a useful mobility model in which users can transparently resume their work on any desktop console. This paper examines the fundamental premise in this system design that modern, off-the-shelf interconnection technology can support the quality-of-service required by today's graphical and multimedia applications. We devised a methodology for analyzing the interactive performance of modern systems, and we characterized the I/O properties of common, real-life applications (e.g. Netscape, streaming video, and Quake) executing in thin-client environments. We have conducted a series of experiments on the Sun Ray™ 1 implementation of this new system architecture, and our results indicate that it provides an effective means of delivering computational services to a workgroup. We have found that response times over a dedicated network are so low that interactive performance is indistinguishable from a dedicated workstation. A simple pixel encoding protocol requires only modest network resources (as little as a 1Mbps home connection) and is quite competitive with the X protocol. Tens of users running interactive applications can share a processor without any noticeable degradation, and many more can share the network. 
The simple protocol over a 100Mbps interconnection fabric can support streaming video and Quake at display rates and resolutions which provide a high-fidelity user experience.", "title": "" }, { "docid": "8f5ca16c82dfdb7d551fdf203c9ebf7a", "text": "We analyze a few of the commonly used statistics-based and machine learning algorithms for natural language disambiguation tasks and observe that they can be recast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach, a sparse network of linear separators, utilizing the Winnow learning algorithm, and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having a very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well studied lexical disambiguation tasks such as context-sensitive spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.", "title": "" }, { "docid": "8a55bf5b614d750a7de6ac34dc321b10", "text": "Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. 
In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the additional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results.", "title": "" }, { "docid": "55b76c1b1d4cabee6ebbe9aa26c4058f", "text": "The Fundamental Law of Information Recovery states, informally, that “overly accurate” estimates of “too many” statistics completely destroys privacy ([DN03] et sequelae). Differential privacy is a mathematically rigorous definition of privacy tailored to analysis of large datasets and equipped with a formal measure of privacy loss [DMNS06, Dwo06]. Moreover, differentially private algorithms take as input a parameter, typically called ε, that caps the permitted privacy loss in any execution of the algorithm and offers a concrete privacy/utility tradeoff. One of the strengths of differential privacy is the ability to reason about cumulative privacy loss over multiple analyses, given the values of ε used in each individual analysis. 
By appropriate choice of ε it is possible to stay within the bounds of the Fundamental Law while releasing any given number of estimated statistics; however, before this work the bounds were not tight. Roughly speaking, differential privacy ensures that the outcome of any analysis on a database x is distributed very similarly to the outcome on any neighboring database y that differs from x in just one row (Definition 2.3). That is, differentially private algorithms are randomized, and in particular the max divergence between these two distributions (a sort of maximum log odds ratio for any event; see Definition 2.2 below) is bounded by the privacy parameter ε. This absolute guarantee on the maximum privacy loss is now sometimes referred to as “pure” differential privacy. A popular relaxation, (ε, δ)-differential privacy (Definition 2.4)[DKM+06], guarantees that with probability at least 1−δ the privacy loss does not exceed ε. Typically δ is taken to be “cryptographically” small, that is, smaller than the inverse of any polynomial in the size of the dataset, and pure differential privacy is simply the special case in which δ = 0. The relaxation frequently permits asymptotically better accuracy than pure differential privacy for the same value of ε, even when δ is very small. What happens in the case of multiple analyses? While the composition of k (ε, 0)-differentially private algorithms is at worst (kε, 0)-differentially private, it is also simultaneously ( √", "title": "" }, { "docid": "6087ad77caa9947591eb9a3f8b9b342d", "text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. 
sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.", "title": "" }, { "docid": "b5df3d884385b8c4e65c42d8ee3a3b1b", "text": "Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. 
Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our method's primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and inverted pendulum.", "title": "" }, { "docid": "e2ea8ec9139837feb95ac432a63afe88", "text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief.
Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.", "title": "" }, { "docid": "660998f8595df10e67bdb550c7ac5a5c", "text": "The role of information technology (IT) in education has significantly increased, but resistance to technology by public school teachers worldwide remains high. This study examined public school teachers’ technology acceptance decision-making by using a research model that is based on key findings from relevant prior research and important characteristics of the targeted user acceptance phenomenon. The model was longitudinally tested using responses from more than 130 teachers attending an intensive 4-week training program on Microsoft PowerPoint, a common but important classroom presentation technology. In addition to identifying key acceptance determinants, we examined plausible changes in acceptance drivers over the course of the training, including their influence patterns and magnitudes. Overall, our model showed a reasonably good fit with the data and exhibited satisfactory explanatory power, based on the responses collected from training commencement and completion. 
Our findings suggest a highly prominent and significant core influence path from job relevance to perceived usefulness and then technology acceptance. Analysis of data collected at the beginning and the end of the training supports most of our hypotheses and sheds light on plausible changes in their influences over time. Specifically, teachers appear to consider a rich set of factors in initial acceptance but concentrate on fundamental determinants (e.g. perceived usefulness and perceived ease of use) in their continued acceptance. © 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "dde768e5944f1ce8c0a68b4cc42eaf81", "text": "The problem of aspect-based sentiment analysis deals with classifying sentiments (negative, neutral, positive) for a given aspect in a sentence. A traditional sentiment classification task involves treating the entire sentence as a text document and classifying sentiments based on all the words. Let us assume we have a sentence such as “the acceleration of this car is fast, but the reliability is horrible”. This can be a difficult sentence because it has two aspects with conflicting sentiments about the same entity. Considering machine learning techniques (or deep learning), how do we encode the information that we are interested in one aspect and its sentiment but not the other? Let us explore various pre-processing steps, features, and methods used to facilitate solving this task.", "title": "" }, { "docid": "01bc5bc18963665e54c3799128b6851b", "text": "In many recent applications, data may take the form of continuous data streams, rather than finite stored data sets. Several aspects of data management need to be reconsidered in the presence of data streams, offering a new research direction for the database community. In this paper we focus primarily on the problem of query processing, specifically on how to define and evaluate continuous queries over data streams. We address semantic issues as well as efficiency concerns.
Our main contributions are threefold. First, we specify a general and flexible architecture for query processing in the presence of data streams. Second, we use our basic architecture as a tool to clarify alternative semantics and processing techniques for continuous queries. The architecture also captures most previous work on continuous queries and data streams, as well as related concepts such as triggers and materialized views. Finally, we map out research topics in the area of query processing over data streams, showing where previous work is relevant and describing problems yet to be addressed.", "title": "" }, { "docid": "99f22bc84690fc357df55484cb7c6e54", "text": "This work presents a Text Segmentation algorithm called TopicTiling. This algorithm is based on the well-known TextTiling algorithm, and segments documents using the Latent Dirichlet Allocation (LDA) topic model. We show that using the mode topic ID assigned during the inference method of LDA, used to annotate unseen documents, improves performance by stabilizing the obtained topics. We show significant improvements over state of the art segmentation algorithms on two standard datasets. As an additional benefit, TopicTiling performs the segmentation in linear time and thus is computationally less expensive than other LDA-based segmentation methods.", "title": "" }, { "docid": "80adf87179f4b3b61bf99d946da4cb2a", "text": "In modern intensive care units (ICUs) a vast and varied amount of physiological data is measured and collected, with the intent of providing clinicians with detailed information about the physiological state of each patient. The data include measurements from the bedside monitors of heavily instrumented patients, imaging studies, laboratory test results, and clinical observations. The clinician’s task of integrating and interpreting the data, however, is complicated by the sheer volume of information and the challenges of organizing it appropriately. 
This task is made even more difficult by ICU patients’ frequently-changing physiological state. Although the extensive clinical information collected in ICUs presents a challenge, it also opens up several opportunities. In particular, we believe that physiologically-based computational models and model-based estimation methods can be harnessed to better understand and track patient state. These methods would integrate a patient’s hemodynamic data streams by analyzing and interpreting the available information, and presenting resultant pathophysiological hypotheses to the clinical staff in an efficient manner. In this thesis, such a possibility is developed in the context of cardiovascular dynamics. The central results of this thesis concern averaged models of cardiovascular dynamics and a novel estimation method for continuously tracking cardiac output and total peripheral resistance. This method exploits both intra-beat and inter-beat dynamics of arterial blood pressure, and incorporates a parametrized model of arterial compliance. We validated our method with animal data from laboratory experiments and ICU patient data. The resulting root-mean-square-normalized errors – at most 15% depending on the data set – are quite low and clinically acceptable. In addition, we describe a novel estimation scheme for continuously monitoring left ventricular ejection fraction and left ventricular end-diastolic volume. We validated this method on an animal data set. Again, the resulting root-mean-square-normalized errors were quite low – at most 13%. By continuously monitoring cardiac output, total peripheral resistance, left ventricular ejection fraction, left ventricular end-diastolic volume, and arterial blood pressure, one has the basis for distinguishing between cardiogenic, hypovolemic, and septic shock. We hope that the results in this thesis will contribute to the development of a next-generation patient monitoring system. Thesis Supervisor: Professor George C. 
Verghese Title: Professor of Electrical Engineering Thesis Supervisor: Dr. Thomas Heldt Title: Postdoctoral Associate", "title": "" }, { "docid": "62e979cf9787ef2fcd8f317413f3fa94", "text": "Starting from conflictive predictions of hitherto disconnected debates in the natural and social sciences, this article examines the spatial structure of transnational human activity (THA) worldwide (a) across eight types of mobility and communication and (b) in its development over time. It is shown that the spatial structure of THA is similar to that of animal displacements and local-scale human motion in that it can be approximated by Lévy flights with heavy tails that obey power laws. Scaling exponent and power-law fit differ by type of THA, being highest in refuge-seeking and tourism and lowest in student exchange. Variance in the availability of resources and opportunities for satisfying associated needs appears to explain these differences. Over time (1960-2010), the Lévy-flight pattern remains intact and remarkably stable, contradicting the popular notion that socio-technological trends lead to a \"death of distance.\" Humans have not become more \"global\" over time, they rather became more mobile in general, i.e. they move and communicate more at all distances. Hence, it would be more adequate to speak of \"mobilization\" than of \"globalization.\" Longitudinal change occurs only in some types of THA and predominantly at short distances, indicating regional rather than global shifts.", "title": "" }, { "docid": "c534935b7ba93e32d8138ecc2046f4e9", "text": "This paper reviews the findings of several studies and surveys that address the increasing popularity and usage of so-called fitness “gamification.” Fitness gamification is used as an overarching and information term for the use of video game elements in non-gaming systems to improve user experience and user engagement. 
In this usage, game components such as a scoreboard, competition amongst friends, and awards and achievements are employed to motivate users to achieve personal health goals. The rise in smartphone usage has also increased the number of mobile fitness applications that utilize gamification principles. The most popular and successful fitness applications are the ones that feature an assemblage of workout tracking, social sharing, and achievement systems. This paper provides an overview of gamification, a description of gamification characteristics, and specific examples of how fitness gamification applications function and are used.", "title": "" }, { "docid": "52ab1e33476341ec7553bdc4cd422461", "text": "Thanks to the decreasing cost of whole-body sensing technology and its increasing reliability, there is an increasing interest in, and understanding of, the role played by body expressions as a powerful affective communication channel. The aim of this survey is to review the literature on affective body expression perception and recognition. One issue is whether there are universal aspects to affect expression perception and recognition models or if they are affected by human factors such as culture. Next, we discuss the difference between form and movement information as studies have shown that they are governed by separate pathways in the brain. We also review psychological studies that have investigated bodily configurations to evaluate if specific features can be identified that contribute to the recognition of specific affective states. The survey then turns to automatic affect recognition systems using body expressions as at least one input modality. The survey ends by raising open questions on data collecting, labeling, modeling, and setting benchmarks for comparing automatic recognition systems.", "title": "" } ]
scidocsrr
67c0d9ec75b3acd859ae215a18889eab
Class point approach for software effort estimation using stochastic gradient boosting technique
[ { "docid": "8e3ced84f384192cfe742294dcee74bc", "text": "The construction of software cost estimation models remains an active topic of research. The basic premise of cost modelling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects. One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases, and may be detrimental to the accuracy of cost estimation models. In this paper we describe an extensive simulation where we evaluate different techniques for dealing with missing data in the context of software cost modelling. Three techniques are evaluated: listwise deletion, mean imputation and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well, with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice. However, this will not necessarily provide the best performance. Consistent best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and a z-score standardisation.", "title": "" } ]
[ { "docid": "7bd0d6ef1d523c49c1a1595e31413e31", "text": "Germination vigor is driven by the ability of the plant embryo, embedded within the seed, to resume its metabolic activity in a coordinated and sequential manner. Studies using \"-omics\" approaches support the finding that a main contributor of seed germination success is the quality of the messenger RNAs stored during embryo maturation on the mother plant. In addition, proteostasis and DNA integrity play a major role in the germination phenotype. Because of its pivotal role in cell metabolism and its close relationships with hormone signaling pathways regulating seed germination, the sulfur amino acid metabolism pathway represents a key biochemical determinant of the commitment of the seed to initiate its development toward germination. This review highlights that germination vigor depends on multiple biochemical and molecular variables. Their characterization is expected to deliver new markers of seed quality that can be used in breeding programs and/or in biotechnological approaches to improve crop yields.", "title": "" }, { "docid": "055071ff6809eaea4eeb0a9f64e49274", "text": "Compressed bitmap indexes are used in systems such as Git or Oracle to accelerate queries. They represent sets and often support operations such as unions, intersections, differences, and symmetric differences. Several important systems such as Elasticsearch, Apache Spark, Netflix’s Atlas, LinkedIn’s Pivot, Metamarkets’ Druid, Pilosa, Apache Hive, Apache Tez, Microsoft Visual Studio Team Services and Apache Kylin rely on a specific type of compressed bitmap index called Roaring. We present an optimized software library written in C implementing Roaring bitmaps: CRoaring. It benefits from several algorithms designed for the single-instruction-multiple-data (SIMD) instructions available on commodity processors. 
In particular, we present vectorized algorithms to compute the intersection, union, difference and symmetric difference between arrays. We benchmark the library against a wide range of competitive alternatives, identifying weaknesses and strengths in our software. Our work is available under a liberal open-source license.", "title": "" }, { "docid": "d90add899632bab1c5c2637c7080f717", "text": "Software testing plays an important role in software development because it can minimize the development cost. We propose a technique for test sequence generation using UML sequence diagrams. UML models give a lot of information that should not be ignored in testing. In this paper, the main features are extracted from the sequence diagram; we then write Java source code for those features according to the ModelJUnit library. ModelJUnit is an extended library of JUnit. Using that source code, we can generate test cases and test coverage automatically. This paper describes a systematic test case generation technique based on model-based testing (MBT) approaches using sequence diagrams.", "title": "" }, { "docid": "d8a68a9e769f137e06ab05e4d4075dce", "text": "The inelastic response of existing reinforced concrete (RC) buildings without seismic details is investigated, presenting the results from more than 1000 nonlinear analyses. The seismic performance is investigated for two buildings, a typical building form of the 60s and a typical form of the 80s. Both structures are designed according to the old Greek codes. These building forms are typical for that period for many Southern European countries. Buildings of the 60s do not have seismic details, while buildings of the 80s have elementary seismic details. The influence of masonry infill walls is also investigated for the building of the 60s. Static pushover and incremental dynamic analyses (IDA) for a set of 15 strong motion records are carried out for the three buildings, two bare and one infilled.
The IDA predictions are compared with the results of pushover analysis and the seismic demand according to Capacity Spectrum Method (CSM) and N2 Method. The results from IDA show large dispersion on the response, available ductility capacity, behaviour factor and failure displacement, depending on the strong motion record. CSM and N2 predictions are enveloped by the nonlinear dynamic predictions, but have significant differences from the mean values. The better behaviour of the building of the 80s compared to buildings of the 60s is validated with both pushover and nonlinear dynamic analyses. Finally, both types of analysis show that fully infilled frames exhibit an improved behaviour compared to bare frames.", "title": "" }, { "docid": "4ac734960f264716721a0f0fa5305925", "text": "Most recent research on layered chalcogenides is understandably focused on single atomic layers. However, it is unclear if single-layer units are the most ideal structures for enhanced gas-solid interactions. To probe this issue further, we have prepared large-area MoS2 sheets ranging from single to multiple layers on 300 nm SiO2/Si substrates using the micromechanical exfoliation method. The thickness and layering of the sheets were identified by optical microscope, invoking recently reported specific optical color contrast, and further confirmed by AFM and Raman spectroscopy. The MoS2 transistors with different thicknesses were assessed for gas-sensing performances with exposure to NO2, NH3, and humidity in different conditions such as gate bias and light irradiation. The results show that, compared to the single-layer counterpart, transistors of few MoS2 layers exhibit excellent sensitivity, recovery, and ability to be manipulated by gate bias and green light.
Further, our ab initio DFT calculations on single-layer and bilayer MoS2 show that the charge transfer is the reason for the decrease in resistance in the presence of applied field.", "title": "" }, { "docid": "a6c8fe495cffd8d62d096d62eaa00bbc", "text": "Automated counting of people in crowd images is a challenging task. The major difficulty stems from the large diversity in the way people appear in crowds. In fact, features available for crowd discrimination largely depend on the crowd density to the extent that people are only seen as blobs in a highly dense scene. We tackle this problem with a growing CNN which can progressively increase its capacity to account for the wide variability seen in crowd scenes. Our model starts from a base CNN density regressor, which is trained in equivalence on all types of crowd images. In order to adapt with the huge diversity, we create two child regressors which are exact copies of the base CNN. A differential training procedure divides the dataset into two clusters and fine-tunes the child networks on their respective specialties. Consequently, without any hand-crafted criteria for forming specialties, the child regressors become experts on certain types of crowds. The child networks are again split recursively, creating two experts at every division. This hierarchical training leads to a CNN tree, where the child regressors are more fine experts than any of their parents. The leaf nodes are taken as the final experts and a classifier network is then trained to predict the correct specialty for a given test image patch. The proposed model achieves higher count accuracy on major crowd datasets. Further, we analyse the characteristics of specialties mined automatically by our method.", "title": "" }, { "docid": "2c834988686bf2d28ba7668ffaf14b0e", "text": "Revealing the latent community structure, which is crucial to understanding the features of networks, is an important problem in network and graph analysis. 
During the last decade, many approaches have been proposed to solve this challenging problem in diverse ways, e.g. with different measures or data structures. Unfortunately, experimental reports on existing techniques fell short in validity and integrity since many comparisons were not based on a unified code base or were merely discussed in theory. We engage in an in-depth benchmarking study of community detection in social networks. We formulate a generalized community detection procedure and propose a procedure-oriented framework for benchmarking. This framework enables us to evaluate and compare various approaches to community detection systematically and thoroughly under identical experimental conditions. On this basis we can analyze and diagnose the inherent defects of existing approaches in depth, and further make effective improvements accordingly. We have re-implemented ten state-of-the-art representative algorithms upon this framework and make comprehensive evaluations of multiple aspects, including efficiency, performance, and sensitivity. We discuss their merits and faults in depth, and draw a set of interesting take-away conclusions. In addition, we present how these algorithms can be diagnosed, resulting in significant improvements.", "title": "" }, { "docid": "e5b2aa76e161661ea613912ba40695bd", "text": "Three meanings of “information” are distinguished: “Information-as-process”; “information-as-knowledge”; and “information-as-thing,” the attributive use of “information” to denote things regarded as informative. The nature and characteristics of “information-as-thing” are discussed, using an indirect approach (“What things are informative?”). Varieties of “information-as-thing” include data, text, documents, objects, and events. On this view “information” includes but extends beyond communication.
Whatever information storage and retrieval systems store and retrieve is necessarily “information-as-thing.” These three meanings of “information,” along with “information processing,” offer a basis for classifying disparate information-related activities (e.g., rhetoric, bibliographic retrieval, statistical analysis) and, thereby, suggest a topography for “information science.”", "title": "" }, { "docid": "484662edf689c774e5cf4ad551a9eb90", "text": "Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most successful applications, GAN models share two common aspects: solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions; and parameterizing the generator and the discriminator as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme.", "title": "" }, { "docid": "d3098b988137a75d77cac438e7ae5287", "text": "Leitsymptome der Aufmerksamkeitsdefizit-/Hyperaktivitätsstörung (ADHS) sind Unaufmerksamkeit, motorische Unruhe und Impulsivität. ADHS wird ätiologisch vorrangig auf genetische Ursachen zurückgeführt und bringt eine erhebliche psychosoziale Problematik für die Betroffenen und ihr soziales Umfeld mit sich. 
Im Rahmen des Kinder- und Jugendgesundheitssurvey (KiGGS) beantworteten die Eltern von insgesamt 7569 Jungen und 7267 Mädchen im Alter von 3–17 Jahren schriftlich einen Fragebogen, der unter anderem eine ADHS-Diagnosefrage und den Strengths and Difficulties Questionnaire (SDQ) enthielt. Zusätzlich erfolgten Verhaltensbeobachtungen von 7919 Kindern (Altersspanne 3–11 Jahre) während der medizinischphysikalischen Tests. Als ADHS-Fälle wurden Teilnehmer eingestuft, deren Eltern eine jemals von einem Arzt oder Psychologen gestellte ADHS-Diagnose angegeben hatten. Als ADHS-Verdachtsfälle wurden Teilnehmer betrachtet, die Werte von ≥ 7 auf der Unaufmerksamkeits-/Hyperaktivitätsskala des SDQ (Elternurteil) aufwiesen. Bei insgesamt 4,8 % der Kinder und Jugendlichen wurde jemals ADHS diagnostiziert. Weitere 4,9 % der Teilnehmer können als Verdachtsfälle gelten. Bei Jungen wurde ADHS um den Faktor 4,3 häufiger diagnostiziert als bei Mädchen. Bereits bei 1,8 % der Teilnehmer im Vorschulalter wurde ADHS diagnostiziert. Im Grundschulalter (7–10 Jahre) steigt die Diagnosehäufigkeit stark. Im Alter von 11–17 Jahren wurde bei jedem zehnten Jungen und jedem 43. Mädchen jemals ADHS diagnostiziert. ADHS wurde häufiger bei Teilnehmern mit niedrigem sozioökonomischem Status diagnostiziert als bei Teilnehmern mit hohem Status. Von Migranten wird seltener über eine ADHS-Diagnose berichtetet, sie sind jedoch häufiger unter den Verdachtsfällen. Diese Diskrepanz könnte auf eine Unterdiagnostizierung oder auf Inanspruchnahmeeffekte bei Migranten hinweisen. Die kurz- und langfristigen medizinischen, sozialen und gesundheitsökonomischen Konsequenzen von ADHS verdeutlichen die hohe Public-Health-Relevanz der Störung. Der hohe Anteil genetischer Faktoren an der Ätiologie der ADHS lässt hier vor allem an Maßnahmen der Sekundär-(Früherkennung und Frühförderung) und Tertiärprävention denken. 
Mit weiteren Auswertungen der KiGGS-Daten können Risikogruppen zukünftig genauer identifiziert und Präventionsansätze weiterentwickelt werden. The cardinal symptoms of attention-deficit/hyperactivity disorder (ADHD) are inattention, hyperactivity and impulsivity. Etiologically, ADHD is mainly put down to genetic causes; it entails a considerable range of psychosocial problems for those affected and their social environment. The parents of a total of 7,569 boys (B) and 7,267 girls (G) aged 3–17 who took part in the German Health Interview and Examination Survey for Children and Adolescents (KiGGS) answered a self-administered questionnaire including an ADHD diagnosis question and the Strengths and Difficulties Questionnaire (SDQ). In addition behavioural observations of 7,919 children (aged 3–11) were carried out during the medical and physical tests. Participants whose parents reported that they had ever been given an ADHD diagnosis by a doctor or psychologist were classified as ADHD cases. Participants were classified as suspected cases of ADHD if they had a value of ≥7 on the SDQ inattention/hyperactivity scale. ADHD had ever been diagnosed in 4.8 % of the children and adolescents altogether (B: 7.7 %, G: 1.8 %). Another 4.9 % of the participants can be considered as suspected cases. Already 1.8 % of the preeschoolers had been given an ADHD diagnosis. At primary school age (7–10 years old) the frequency of diagnosis rises sharply. At age 11–17, ADHD had ever been diagnosed in 1 in 10 boys and 1 in 43 girls. ADHD had been diagnosed significantly more frequently among participants of low socio-economic status (SES) than among participants of high SES. A diagnosis of ADHD is reported less often for migrants, they rank more frequently among the suspected cases. The discrepancy between confirmed and suspected cases of ADHD among migrants may point to lower diagnosis rates or lower utilization of medical services. 
The short- and long-term medical, social and health-economic effects of ADHD illustrate the major public health relevance of the disorder. As for prevention, the high share of genetic factors in ADHD etiology primarily suggests secondary prevention (early support and early diagnosis) and tertiary prevention measures. Further analysis of the KiGGS data could prospectively identify risk groups more precisely and refine preventional approaches.", "title": "" }, { "docid": "d3fa8a6b4cd436b16d98166e2c4c230d", "text": "Inferring phenotypic patterns from population-scale clinical data is a core computational task in the development of personalized medicine. One important source of data on which to conduct this type of research is patient Electronic Medical Records (EMR). However, the patient EMRs are typically sparse and noisy, which creates significant challenges if we use them directly to represent patient phenotypes. In this paper, we propose a data driven phenotyping framework called Pacifier (PAtient reCord densIFIER), where we interpret the longitudinal EMR data of each patient as a sparse matrix with a feature dimension and a time dimension, and derive more robust patient phenotypes by exploring the latent structure of those matrices. Specifically, we assume that each derived phenotype is composed of a subset of the medical features contained in original patient EMR, whose value evolves smoothly over time. We propose two formulations to achieve such goal. One is Individual Basis Approach (IBA), which assumes the phenotypes are different for every patient. The other is Shared Basis Approach (SBA), which assumes the patient population shares a common set of phenotypes. We develop an efficient optimization algorithm that is capable of resolving both problems efficiently. Finally we validate Pacifier on two real world EMR cohorts for the tasks of early prediction of Congestive Heart Failure (CHF) and End Stage Renal Disease (ESRD). 
Our results show that the predictive performance in both tasks can be improved significantly by the proposed algorithms (average AUC score improved from 0.689 to 0.816 on CHF, and from 0.756 to 0.838 on ESRD, respectively, at diagnosis-group granularity). We also illustrate some interesting phenotypes derived from our data.", "title": "" }, { "docid": "7b507a50fd567d0d8679fea29495becd", "text": "Ontologies are the backbone of the Semantic Web and facilitate sharing, integration, and discovery of data. However, the number of existing ontologies is growing vastly, which makes it problematic for software developers to decide which ontology is suitable for their application. Furthermore, often only a small part of the ontology will be relevant for a certain application. In other cases, ontologies are so large that they have to be split up into more manageable chunks to work with them. To this end, in this demo, we present OAPT, an ontology analysis and partitioning tool. First, before a candidate input ontology is partitioned, OAPT analyzes it to determine if this ontology is worth considering, using a predefined set of criteria that quantify the semantic richness of the ontology. Once the ontology is investigated, we apply a seeding-based partitioning algorithm to partition it into a set of modules. Through the demonstration of OAPT we introduce the tool’s capabilities and highlight its effectiveness and usability.", "title": "" }, { "docid": "37a6f3773aebf46cc40266b8bb5692af", "text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). 
We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.", "title": "" }, { "docid": "49af635c42b9360e0df85eac7eea8842", "text": "We may not be able to make you love reading, but low power methodology manual for system on chip design 2nd printing will lead you to love reading starting from now. Book is the window to open the new world. The world that you want is in the better stage and level. World will always guide you to even the prestige stage of the life. You know, this is some of how reading will give you the kindness. In this case, more books you read more knowledge you know, but it can mean also the bore is full.", "title": "" }, { "docid": "ef2ab09b5095fc151e7ac4054f099426", "text": "We present a two stage parser that recovers Penn Treebank style syntactic analyses of new sentences including skeletal syntactic structure, and, for the first time, both function tags and empty categories. The accuracy of the first-stage parser on the standard Parseval metric matches that of the (Collins, 2003) parser on which it is based, despite the data fragmentation caused by the greatly enriched space of possible node labels. This first stage simultaneously achieves near state-of-theart performance on recovering function tags with minimal modifications to the underlying parser, modifying less than ten lines of code. 
The second stage achieves state-of-the-art performance on the recovery of empty categories by combining a linguistically-informed architecture and a rich feature set with the power of modern machine learning methods.", "title": "" }, { "docid": "96e34b9e05860a2cbed2f7464d139c5b", "text": "BACKGROUND\nFindings from family and twin studies support a genetic contribution to the development of sexual orientation in men. However, previous studies have yielded conflicting evidence for linkage to chromosome Xq28.\n\n\nMETHOD\nWe conducted a genome-wide linkage scan on 409 independent pairs of homosexual brothers (908 analyzed individuals in 384 families), by far the largest study of its kind to date.\n\n\nRESULTS\nWe identified two regions of linkage: the pericentromeric region on chromosome 8 (maximum two-point LOD = 4.08, maximum multipoint LOD = 2.59), which overlaps with the second strongest region from a previous separate linkage scan of 155 brother pairs; and Xq28 (maximum two-point LOD = 2.99, maximum multipoint LOD = 2.76), which was also implicated in prior research.\n\n\nCONCLUSIONS\nResults, especially in the context of past studies, support the existence of genes on pericentromeric chromosome 8 and chromosome Xq28 influencing development of male sexual orientation.", "title": "" }, { "docid": "c8a9aff29f3e420a1e0442ae7caa46eb", "text": "Four new species of Ixora (Rubiaceae, Ixoreae) from Brazil are described and illustrated and their relationships to morphologically similar species as well as their conservation status are discussed. The new species, Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla are endemic to the Atlantic Forest of southern Bahia and Espirito Santo. São descritas e ilustradas quatro novas espécies de Ixora (Rubiaceae, Ixoreae) para o Brasil bem como discutidos o relacionamento morfológico com espécies mais similares e o estado de conservação. 
The new species Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla are endemic to the Atlantic Forest, in the stretch comprising the south of Bahia state and the state of Espírito Santo.", "title": "" }, { "docid": "1bdd050958754ef19dd35f53dd055b5a", "text": "We present a method for isotropic remeshing of arbitrary genus surfaces. The method is based on a mesh adaptation process, namely, a sequence of local modifications performed on a copy of the original mesh, while referring to the original mesh geometry. The algorithm has three stages. In the first stage the required number of vertices is generated by iterative simplification or refinement. The second stage performs an initial vertex partition using an area-based relaxation method. The third stage achieves precise isotropic vertex sampling prescribed by a given density function on the mesh. We use a modification of Lloyd’s relaxation method to construct a weighted centroidal Voronoi tessellation of the mesh. We apply these iterations locally on small patches of the mesh that are parameterized into the 2D plane. This allows us to handle arbitrarily complex meshes with any genus and any number of boundaries. The efficiency and the accuracy of the remeshing process are achieved using a patch-wise parameterization technique. Key-words: Surface mesh generation, isotropic triangle meshing, centroidal Voronoi tessellation, local parameterization. ∗ Technion, Haifa, Israel † INRIA Sophia-Antipolis ‡ Technion, Haifa, Israel Isotropic Remeshing of Surfaces Using a Local Parameterization Abstract: This article describes a method for isotropic remeshing of triangulated surfaces. The approach relies on a local mesh adaptation technique. The idea is to perform a sequence of elementary operations on a copy of the original mesh, while referring to the original mesh for the geometry. The algorithm has three stages. 
The first stage brings the mesh complexity to the desired number of vertices by iterative refinement or decimation. The second stage performs an initial distribution of the vertices via a relaxation technique that optimizes a local balancing of the areas over the triangles. The third stage performs an isotropic placement of the vertices via a Lloyd relaxation to build a centroidal Voronoi tessellation. The Lloyd relaxation iterations are applied locally in a 2D parametric space computed on the fly over a subset of the original triangulation, so that triangulations of arbitrary complexity and genus can be remeshed efficiently. Key words: Surface meshing, isotropic triangle meshing, centroidal Voronoi diagrams, local parameterization. Isotropic Remeshing of Surfaces", "title": "" }, { "docid": "1bdf73110d3fdbe2cfbbd99f8388d170", "text": "ACKNOWLEDGEMENT First of all I would like to thank my ALLAH Almighty Who gave me the courage, health, and energy to accomplish my thesis in due time and without Whose help this study which required untiring efforts would not have been possible to complete within the time limits. key elements required from the supervisor(s) to write and complete a thesis of a good standard and a quality within deadlines. It is a matter of utmost pleasure for me to extend my gratitude and give due credit to my supervisor Yinghong Chen whose support has always been there in need of time and who provided me with all these key elements to complete my dissertation within the time frame. Acknowledgement would be incomplete without extending my gratitude to one of my friends in Pakistan Mr. mammoth help in data collection made this study possible. Moreover, he has been supporting me enthusiastically throughout my work to make my thesis ready in due time. 
My thanks are also due to my examiner Max Zamanian, whose valuable comments and suggestions made a colossal contribution to improving my dissertation. Last but not least, I extend my thanks to my entire family for moral support and prayers for my health and successful completion of my dissertation within time limits. ABSTRACT Islamic banking and finance in Pakistan started in 1977-78 with the elimination of interest in compliance with the Principles of Islamic Shari'ah in Islamic banking practices. Since then, amendments in the financial system to allow the issuance of a new interest-free instrument of corporate financing, promulgation of an ordinance to permit the establishment of Mudaraba companies and floatation of Mudaraba Certificates, constitution of the Commission for Transformation of Financial System (CTFS), and the establishment of the Islamic Banking Department by the State Bank of Pakistan are some of the key steps taken by the government. The aim of this study is to examine and evaluate the performance of the first Islamic bank in Pakistan, i.e. Meezan Bank Limited (MBL), in comparison with that of a group of 5 Pakistani conventional banks. The study evaluates the performance of the Islamic bank (MBL) in profitability, liquidity, risk, and efficiency for the period of 2003-2007. Asset Utilization (AU) and Income to Expense ratio (IER) are used to assess banking performances. T-test and F-test are used in determining the significance of the differential performance of the two groups of banks. The study found that MBL …", "title": "" } ]
scidocsrr
904ad87b2ecd96dc356330cb8c2f2b77
Shallow and Deep Networks Intrusion Detection System: A Taxonomy and Survey
[ { "docid": "4c1dd5cdf03e618f4ac1923c4fbcc251", "text": "With the rapid expansion of computer usage and computer networks, the security of computer systems has become very important. Every day, new kinds of attacks are being faced by industries. As the threat becomes a more serious matter year by year, intrusion detection technologies are indispensable for network and computer security. A variety of intrusion detection approaches exist to resolve this severe issue, but the main problem is performance. It is important to increase the detection rates and reduce false alarm rates in the area of intrusion detection. In order to detect intrusions, various approaches have been developed and proposed over the last decade. In this paper, a detailed survey of intrusion detection based on various techniques has been presented. Here, the techniques are classified as follows: i) papers related to neural networks, ii) papers related to support vector machines, iii) papers related to the K-means classifier, iv) papers related to hybrid techniques, and v) papers related to other detection techniques. For comprehensive analysis, detection rate, time and false alarm rate from various research papers have been taken.", "title": "" } ]
[ { "docid": "531ac7d6500373005bae464c49715288", "text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.", "title": "" }, { "docid": "2c328d1dd45733ad8063ea89a6b6df43", "text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. 
We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.", "title": "" }, { "docid": "7dc5e63ddbb8ec509101299924093c8b", "text": "The task of aspect and opinion terms co-extraction aims to explicitly extract aspect terms describing features of an entity and opinion terms expressing emotions from user-generated texts. To achieve this task, one effective approach is to exploit relations between aspect terms and opinion terms by parsing syntactic structure for each sentence. However, this approach requires expensive effort for parsing and highly depends on the quality of the parsing results. In this paper, we offer a novel deep learning model, named coupled multi-layer attentions. The proposed model provides an end-to-end solution and does not require any parsers or other linguistic resources for preprocessing. Specifically, the proposed model is a multilayer attention network, where each layer consists of a couple of attentions with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms. They are learned interactively to dually propagate information between aspect terms and opinion terms. Through multiple layers, the model can further exploit indirect relations between terms for more precise information extraction. Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves stateof-the-art performances compared with several baselines.", "title": "" }, { "docid": "c68397cdbe538fd22fe88c0ff4e47879", "text": "With the higher demand of the three dimensional (3D) imaging, a high definition real-time 3D video system based on FPGA is proposed. The system is made up of CMOS image sensors, DDR2 SDRAM, High Definition Multimedia Interface (HDMI) transmitter and Field Programmable Gate Array (FPGA). 
The CMOS image sensor produces the digital video stream. The DDR2 SDRAM buffers the large amount of video data. The FPGA processes the video stream and realizes the 3D data format conversion. The HDMI transmitter is utilized to transmit the 3D-format data. Using the active 3D display device and shutter glasses, the system can achieve a lifelike real-time 3D high-definition imaging effect. The resolution of the system is 720p@60Hz in 3D mode.", "title": "" }, { "docid": "5abcd733dce7e8ced901830cbcaad56b", "text": "Stored-value cards, or prepaid cards, are increasingly popular. Like credit cards, their use is vulnerable to fraud, costing merchants and card processors millions of dollars. Prior techniques to automate fraud detection rely on a priori rules or specialized learned models associated with the customer. Mostly, these techniques do not consider fraud sequences or changing behavior, which can lead to false alarms. This study demonstrates how a transaction model can be dynamically created and updated, and fraud can be automatically detected for prepaid cards. A card processing company creates models of the store terminals rather than the customers, in part, because of the anonymous nature of prepaid cards. The technique automatically creates, updates, and compares hidden Markov models (HMM) of merchant terminals. We present fraud detection and experiments on real transactional data, showing the efficiency and effectiveness of the approach. In the fraud test cases, derived from known fraud cases, the technique has a good F-score. The technique can detect fraud in real-time for merchants, as card transactions are processed by a modern transaction processing system. © 2017 Published by Elsevier Ltd.", "title": "" }, { "docid": "fc25e19d03a6686a0829a823d97cedbe", "text": "OBJECTIVE\nThe problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. 
To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD).\n\n\nMETHODS\nA relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a \"leave-n-out\" randomized permutation cross-validation procedure.\n\n\nRESULTS\nA list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%].\n\n\nCONCLUSIONS\nThese results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG.\n\n\nSIGNIFICANCE\nThe proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs.", "title": "" }, { "docid": "70d901bae1e40dc5c585ae1f73c00776", "text": "Sexual abuse includes any activity with a child, before the age of legal consent, that is for the sexual gratification of an adult or a significantly older child. Sexual mistreatment of children by family members (incest) and nonrelatives known to the child is the most common type of sexual abuse. 
Intrafamilial sexual abuse is difficult to document and manage, because the child must be protected from additional abuse and coercion not to reveal or to deny the abuse, while attempts are made to preserve the family unit. The role of a comprehensive forensic medical examination is of major importance in the full investigation of such cases and the building of an effective prosecution in court. The protection of the sexually abused child from any additional emotional trauma during the physical examination is of great importance. A brief assessment of the developmental, behavioral, mental and emotional status should also be obtained. The physical examination includes inspection of the whole body with special attention to the mouth, breasts, genitals, perineal region, buttocks and anus. The next concern for the doctor is the collection of biologic evidence, provided that the alleged sexual abuse has occurred within the last 72 hours. Cultures and serologic tests for sexually transmitted diseases are decided by the doctor according to the special circumstances of each case. A pregnancy test should also be performed in each case of a girl of reproductive age.", "title": "" }, { "docid": "a56b8688c380226d844f705a1017ba5f", "text": "[1] Data analysis has been one of the core activities in scientific research, but limited by the availability of analysis methods in the past, data analysis was often relegated to data processing. To accommodate the variety of data generated by nonlinear and nonstationary processes in nature, the analysis method would have to be adaptive. Hilbert-Huang transform, consisting of empirical mode decomposition and Hilbert spectral analysis, is a newly developed adaptive data analysis method, which has been used extensively in geophysical research. 
In this review, we will briefly introduce the method, list some recent developments, demonstrate the usefulness of the method, summarize some applications in various geophysical research areas, and finally, discuss the outstanding open problems. We hope this review will serve as an introduction of the method for those new to the concepts, as well as a summary of the present frontiers of its applications for experienced research scientists.", "title": "" }, { "docid": "0b4c076b80d91eb20ef71e63f17e9654", "text": "Current sports injury reporting systems lack a common conceptual basis. We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. 
Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.", "title": "" }, { "docid": "a302b0a5f20daf162b6d10f5b0f8aaab", "text": "In this work we present a novel end-to-end framework for tracking and classifying a robot’s surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment which we in turn exploit through the principle of inductive transfer of knowledge to perform the task of it’s semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. 
Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.", "title": "" }, { "docid": "35e6ad2d7c84a5a96c44234962eea57d", "text": "Material Recognition Ting-Chun Wang1, Jun-Yan Zhu1, Ebi Hiroaki2, Manmohan Chandraker2, Alexei A. Efros1, Ravi Ramamoorthi2 1University of California, Berkeley University of California, San Diego Motivation • Light-field images should help recognize materials since reflectance can be estimated • CNNs have recently been very successful in material recognition • We combine these two and propose a new light-field dataset since no one is currently available", "title": "" }, { "docid": "ab5963208b0c5a513ceca6e926e8aab9", "text": "This paper presents a large-scale corpus for non-task-oriented dialogue response selection, which contains over 27K distinct prompts more than 82K responses collected from social media.1 To annotate this corpus, we define a 5-grade rating scheme: bad, mediocre, acceptable, good, and excellent, according to the relevance, coherence, informativeness, interestingness, and the potential to move a conversation forward. To test the validity and usefulness of the produced corpus, we compare various unsupervised and supervised models for response selection. Experimental results confirm that the proposed corpus is helpful in training response selection models.", "title": "" }, { "docid": "c73623dd471b82bb8ab1308d31b14713", "text": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite mathematical problems in image processing partial differential equations and the calculus of variations book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. 
Well, when you are really dying of mathematical problems in image processing partial differential equations and the calculus of variations, just pick it. You know, this book is always making the fans to be dizzy if not to find.", "title": "" }, { "docid": "231f27d4cb32a5687a05dd26e775fbb8", "text": "There are currently more objects connected to the Internet than there are people in the world. This gap will continue to grow, as more objects gain the ability to directly interface with the Internet or become physical representations of data accessible via Internet systems. This trend toward greater independent object interaction in the Internet is collectively described as the Internet of Things (IoT). As with previous global technology trends, such as widespread mobile adoption and datacentre consolidation, the changing operating environment associated with the Internet of Things represents considerable impact to the attack surface and threat environment of the Internet and Internet-connected systems. The increase in Internet-connected systems and the accompanying, non-linear increase in Internet attack surface can be represented by several tiers of increased surface complexity. Users, or groups of users, are linked to a non-linear number of connected entities, which in turn are linked to a non-linear number of indirectly connected, trackable entities. At each tier of this model, the increasing population, complexity, heterogeneity, interoperability, mobility, and distribution of entities represents an expanding attack surface, measurable by additional channels, methods, and data items. Further, this expansion will necessarily increase the field of security stakeholders and introduce new manageability challenges. 
This document provides a framework for measurement and analysis of the security implications inherent in an Internet that is dominated by non-user endpoints, content in the form of objects, and content that is generated by objects without direct user involvement.", "title": "" }, { "docid": "d676b25f9704fe89d5d8fe929c639829", "text": "The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but also cloud infrastructure that was traditionally limited to single provider data centers is now evolving. In this paper, we firstly discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas, such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges thatwill need to be addressed for realising the potential of next generation cloud systems. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cef1270ff3e263d2becf551288b08efe", "text": "Sentiment Analysis has become a significant research matter for its probable in tapping into the vast amount of opinions generated by the people. Sentiment analysis deals with the computational conduct of opinion, sentiment within the text. People sometimes uses sarcastic text to express their opinion within the text. Sarcasm is a type of communication act in which the people write the contradictory of what they mean in reality. The intrinsically vague nature of sarcasm sometimes makes it hard to understand. Recognizing sarcasm can promote many sentiment analysis applications. 
Automatically detecting sarcasm is an approach for predicting sarcasm in text. In this paper we have tried to discuss the past work that has been done on detecting sarcasm in text. This paper discusses the approaches, features, datasets, and issues associated with sarcasm detection. Performance values associated with the past work have also been discussed. Various tables that present different dimensions of past work, like the dataset used, features, approaches, and performance values, have also been included.", "title": "" }, { "docid": "b53e5d6054b684990e9c5c1e5d2b6b7d", "text": "Automatic Dependent Surveillance-Broadcast (ADS-B) is one of the key technologies for future “e-Enabled” aircrafts. ADS-B uses avionics in the e-Enabled aircrafts to broadcast essential flight data such as call sign, altitude, heading, and other extra positioning information. On the one hand, ADS-B brings significant benefits to the aviation industry, but, on the other hand, it could pose security concerns as channels between ground controllers and aircrafts for the ADS-B communication are not secured, and ADS-B messages could be captured by random individuals who own ADS-B receivers. In certain situations, ADS-B messages contain sensitive information, particularly when communications occur among mission-critical civil airplanes. These messages need to be protected from any interruption and eavesdropping. The challenge here is to construct an encryption scheme that is fast enough for very frequent encryption and that is flexible enough for effective key management. In this paper, we propose a Staged Identity-Based Encryption (SIBE) scheme, which modifies Boneh and Franklin's original IBE scheme to address those challenges, that is, to construct an efficient and functional encryption scheme for the ADS-B system. 
Based on the proposed SIBE scheme, we provide a confidentiality framework for future e-Enabled aircraft with ADS-B capability.", "title": "" }, { "docid": "cfaff335370e7bb63dd5179527157ef7", "text": "The predictive accuracy of a survival model can be summarized using extensions of the proportion of variation explained by the model, or R2, commonly used for continuous response models, or using extensions of sensitivity and specificity, which are commonly used for binary response models. In this article we propose new time-dependent accuracy summaries based on time-specific versions of sensitivity and specificity calculated over risk sets. We connect the accuracy summaries to a previously proposed global concordance measure, which is a variant of Kendall's tau. In addition, we show how standard Cox regression output can be used to obtain estimates of time-dependent sensitivity and specificity, and time-dependent receiver operating characteristic (ROC) curves. Semiparametric estimation methods appropriate for both proportional and nonproportional hazards data are introduced, evaluated in simulations, and illustrated using two familiar survival data sets.", "title": "" }, { "docid": "47ce8e943d402976b0455f221d0ea537", "text": "A major challenge for Medical Image Retrieval (MIR) is the discovery of relationships between low-level image features (intensity, gradient, texture, etc.) and high-level semantics such as modality, anatomy or pathology. Convolutional Neural Networks (CNNs) have been shown to have an inherent ability to automatically extract hierarchical representations from raw data. Their successful application in a variety of generalised imaging tasks suggests great potential for MIR. However, a major hurdle to their deployment in the medical domain is the relative lack of robust training corpora when compared to general imaging benchmarks such as ImageNET and CIFAR. 
In this paper, we present the adaptation of CNNs to the medical clustering task at ImageCLEF 2015.", "title": "" }, { "docid": "cae197969610816a4bd1946fc370851f", "text": "Outlier detection is a critical function across a diverse range of tasks and domains. There are numerous outlier detection methods, the majority of which produce scores to indicate an outlier versus inlier. An issue with these scores is that they can be difficult to interpret and do not allow for comparisons between different methods. One solution is to convert the outlier score to probabilities. These probability estimates can provide understandable and meaningful results for assessing outlying values. Moreover, the probabilities can be combined to produce an ensemble of outlier detection methods, further enhancing the detection of outliers. In this paper, we propose a unique approach leveraging probabilistic programming to fit the original outlier score distributions to a 3-parameter Lognormal distribution. We provide empirical evidence for the use of this distribution, compare the probability estimates with the outlier scores, discuss confidence in these estimates, evaluate detection performance via the probabilities, and provide an ensemble detection example. Our research indicates this approach reasonably models the original outlier scores, resulting in meaningful outlier probability estimates.", "title": "" } ]
scidocsrr
9ff0800c7b2d62b54f9f0863956a8311
Can neural machine translation do simultaneous translation?
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "cb929b640f8ee7b550512dd4d0dc8e17", "text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.", "title": "" }, { "docid": "247eced239dfd8c1631d80a592593471", "text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. 
In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.", "title": "" } ]
[ { "docid": "bebead03e8645e35a304a425dc34e038", "text": "Given the potential importance of technology parks, their complexity in terms of the scope of required investment, and the growing interest of governments in using them as tools for creating sustainable development, there is a pressing need for a better understanding of the critical success factors of these entities. However, Briggs and Watt (2001) argued that the goal of many technology parks and the factors driving innovation success are still a mystery. In addition, it is argued that the problem with analyzing technology parks and cluster building is that recent studies analyze “the most celebrated case studies... to ‘explain’ their success” (Holbrook and Wolfe, 2002). This study uses intensive interviewing of technology park managers and managers of tenant firms in the technology parks to explore the critical success factors of four of Australia’s technology parks. The study identified the following critical success factors: a culture of risk-taking “entrepreneurism”, an autonomous park management that is independent of university officials and government bureaucrats, an enabling environment, a critical mass of companies that allows for synergies within the technology park, the presence of internationally renowned innovative companies, and finally a shared vision among the technology park stakeholders.", "title": "" }, { "docid": "3039e9b5271445addc3e824c56f89490", "text": "From the recent availability of images recorded by synthetic aperture radar (SAR) airborne systems, automatic results of digital elevation models (DEMs) on urban structures have been published lately. This paper deals with the automatic extraction of three-dimensional (3-D) buildings from stereoscopic high-resolution images recorded by the SAR airborne RAMSES sensor from the French Aerospace Research Center (ONERA). On these images, roofs are not very textured whereas typical strong L-shaped echoes are visible.
These returns generally result from dihedral corners between ground and structures. They provide a part of the building footprints and the ground altitude, but not the building heights. Thus, we present an adapted processing scheme in two steps. First is stereoscopic structure extraction from L-shaped echoes. Buildings are detected on each image using the Hough transform. Then they are recognized during a stereoscopic refinement stage based on a criterion optimization. Second, is height measurement. As most of previous extracted footprints indicate the ground altitude, building heights are found by monoscopic and stereoscopic measures. Between structures, ground altitudes are obtained by a dense matching process. Experiments are performed on images representing an industrial area. Results are compared with a ground truth. Advantages and limitations of the method are brought out.", "title": "" }, { "docid": "f68a02ac83df98b48e9afbe4b54c49f3", "text": "We propose a brand new “Liberal” Event Extraction paradigm to extract events and discover event schemas from any input corpus simultaneously. We incorporate symbolic (e.g., Abstract Meaning Representation) and distributional semantics to detect and represent event structures and adopt a joint typing framework to simultaneously extract event types and argument roles and discover an event schema. Experiments on general and specific domains demonstrate that this framework can construct high-quality schemas with many event and argument role types, covering a high proportion of event types and argument roles in manually defined schemas. We show that extraction performance using discovered schemas is comparable to supervised models trained from a large amount of data labeled according to predefined event types. 
The extraction quality of new event types is also promising.", "title": "" }, { "docid": "4b354edbd555b6072ae04fb9befc48eb", "text": "We present a generative method for the creation of geometrically complex and materially heterogeneous objects. By combining generative design and additive manufacturing, we demonstrate a unique form-finding approach and method for multi-material 3D printing. The method offers a fast, automated and controllable way to explore an expressive set of symmetrical, complex and colored objects, which makes it a useful tool for design exploration and prototyping. We describe a recursive grammar for the generation of solid boundary surface models suitable for a variety of design domains. We demonstrate the generation and digital fabrication of watertight 2-manifold polygonal meshes, with feature-aligned topology, that can be produced on a wide variety of 3D printers, as well as post-processed with traditional 3D modeling tools. To date, objects with intricate spatial patterns and complex heterogeneous material compositions generated by this method can only be produced through 3D printing. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "02f09c60a5d6aaad43831e933b967aeb", "text": "The problem of plagiarism in programming assignments by students in computer science courses has caused considerable concern among both faculty and students. There are a number of methods which instructors use in an effort to control the plagiarism problem. In this paper we describe a plagiarism detection system which was recently implemented in our department. This system is being used to detect similarities in student programs.", "title": "" }, { "docid": "85e76a44cf95521296a92dadcbc5e8d0", "text": "This paper presents a four-channel bi-directional core chip in 0.13 um CMOS for X-band phased array Transmit/Receive (T/R) module.
Each channel consists of a 5-bit step attenuator, a 6-bit phase shifter, bi-directional gain blocks (BDGB), and a bi-directional amplifier (BDA). Additional circuits such as low drop out (LDO) regulator, bias circuits with band-gap reference (BGR), and serial to parallel interface (SPI) are integrated for stable biasing and ease of interface. The chip size is 6.9 × 1.6 mm2 including pads which corresponds to 2.8 mm2 per channel. The phase and attenuation coverage is 360° with the LSB of 5.625°, and 31dB with the LSB of 1dB, respectively. The RMS phase error is better than 2.3°, and the RMS attenuation error is better than 0.25 dB at 9-10 GHz. The Tx mode reference-state gain in each channel is 11.3-12.2 dB including the 4-way power combiner insertion losses ideally 6 dB, and the Rx mode gain is 8.6-9.5 dB at 9-10 GHz. The output P1dB in Tx mode is > 11 dBm at 9-10 GHz. To the best of authors' knowledge, this is the smallest size per channel X-band core chip in CMOS technology with bi-directional operation and competitive RF performance to-date.", "title": "" }, { "docid": "fbddd20271cf134e15b33e7d6201c374", "text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.", "title": "" }, { "docid": "1d1291cdad5f4ae0453417caa465cc95", "text": "Multipath TCP is a new transport protocol that enables systems to exploit available paths through multiple network interfaces. 
MPTCP is particularly useful for mobile devices, which frequently have multiple wireless interfaces. However, these devices have limited power capacity and thus judicious use of these interfaces is required. In this work, we develop a model for MPTCP energy consumption derived from experimental measurements using MPTCP on a mobile device with both cellular and WiFi interfaces. Using our MPTCP energy model, we identify the operating region where MPTCP can be more power efficient than either standard TCP or MPTCP. Based on our findings, we also design and implement an improved energy-efficient MPTCP that reduces power consumption by up to 8% in our experiments, while preserving the availability and robustness benefits of MPTCP.", "title": "" }, { "docid": "95903410bc39b26e44f6ea80ad85e182", "text": "We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. 
In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.", "title": "" }, { "docid": "6a26336e9aaaaaf32c8f8828205f3e76", "text": "OBJECTIVE\nLesion-based mapping of speech pathways has been possible only during invasive neurosurgical procedures using direct cortical stimulation (DCS). However, navigated transcranial magnetic stimulation (nTMS) may allow for lesion-based interrogation of language pathways noninvasively. Although not lesion-based, magnetoencephalographic imaging (MEGI) is another noninvasive modality for language mapping. In this study, we compare the accuracy of nTMS and MEGI with DCS.\n\n\nMETHODS\nSubjects with lesions around cortical language areas underwent preoperative nTMS and MEGI for language mapping. nTMS maps were generated using a repetitive TMS protocol to deliver trains of stimulations during a picture naming task. MEGI activation maps were derived from adaptive spatial filtering of beta-band power decreases prior to overt speech during picture naming and verb generation tasks. The subjects subsequently underwent awake language mapping via intraoperative DCS. The language maps obtained from each of the 3 modalities were recorded and compared.\n\n\nRESULTS\nnTMS and MEGI were performed on 12 subjects. nTMS yielded 21 positive language disruption sites (11 speech arrest, 5 anomia, and 5 other) while DCS yielded 10 positive sites (2 speech arrest, 5 anomia, and 3 other). MEGI isolated 32 sites of peak activation with language tasks. Positive language sites were most commonly found in the pars opercularis for all three modalities. In 9 instances the positive DCS site corresponded to a positive nTMS site, while in 1 instance it did not. 
In 4 instances, a positive nTMS site corresponded to a negative DCS site, while 169 instances of negative nTMS and DCS were recorded. The sensitivity of nTMS was therefore 90%, specificity was 98%, the positive predictive value was 69% and the negative predictive value was 99% as compared with intraoperative DCS. MEGI language sites for verb generation and object naming correlated with nTMS sites in 5 subjects, and with DCS sites in 2 subjects.\n\n\nCONCLUSION\nMaps of language function generated with nTMS correlate well with those generated by DCS. Negative nTMS mapping also correlates with negative DCS mapping. In our study, MEGI lacks the same level of correlation with intraoperative mapping; nevertheless it provides useful adjunct information in some cases. nTMS may offer a lesion-based method for noninvasively interrogating language pathways and be valuable in managing patients with peri-eloquent lesions.", "title": "" }, { "docid": "69058572e8baaef255a3be6ac9eef878", "text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. 
Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.", "title": "" }, { "docid": "3ad19b3710faeda90db45e2f7cebebe8", "text": "Motion planning is a fundamental problem in robotics. It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collisionavoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). 
Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.", "title": "" }, { "docid": "7dba7b28582845bf13d9f9373e39a2af", "text": "The Internet and social media provide a major source of information about people's opinions. Due to the rapidly growing number of online documents, it becomes a time-consuming and hard task to obtain and analyze the desired opinionated information. Sentiment analysis is the classification of sentiments expressed in documents. To improve classification performance, feature selection methods, which help to identify the most valuable features, are generally applied. In this paper, we compare the performance of four feature selection methods, namely Chi-square, Information Gain, Query Expansion Ranking, and Ant Colony Optimization, using the Maximum Entropy Modeling classification algorithm on a Turkish Twitter dataset. In this way, the effects of feature selection methods on the performance of sentiment analysis of Turkish Twitter data are evaluated. Experimental results show that the Query Expansion Ranking and Ant Colony Optimization methods outperform other traditional feature selection methods for sentiment analysis.", "title": "" }, { "docid": "107bb53e3ceda3ee29fc348febe87f11", "text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. An irregular leather sheet is used in this work. The system is self-protected by a user name and password set through software for security purposes. Only an authorized user can enter the system by entering the valid pin code. After entering the system, the user can measure the area of any irregular sheet, and monitor and control the system.
The heart of the system is a Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purposes, a GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending an SMS message to the GSM modem.", "title": "" }, { "docid": "673e1ec63a0e84cf3fbf450928d89905", "text": "This study proposes an IoT (Internet of Things) system for the monitoring and control of an aquaculture platform. The proposed system combines network surveillance with mobile devices and a remote platform to collect real-time farm environmental information. The real-time data is captured and displayed via a ZigBee wireless transmitter to remote computer terminals. The system permits real-time observation and control of the aquaculture platform with dissolved oxygen sensors and temperature sensing elements, using A/D converters and microcontrollers for signal conversion. The proposed system uses municipal electricity coupled with a battery power source, with battery intervention if municipal power is interrupted. This study obtains the best fused value of the multi-odometer measurement data for optimization via maximum likelihood estimation (MLE). Finally, the experimental results show efficient and precise computation.", "title": "" }, { "docid": "d80ca368563546b1c2a7aa99d97e39d2", "text": "In this paper we present a short history of logics: from particular cases of 2-symbol or numerical valued logic to the general case of n-symbol or numerical valued logic. We show generalizations of 2-valued Boolean logic to fuzzy logic, and also from Kleene's and Lukasiewicz' 3-symbol valued logics or Belnap's 4-symbol valued logic to the most general n-symbol or numerical valued refined neutrosophic logic.
Two classes of neutrosophic norm (n-norm) and neutrosophic conorm (n-conorm) are defined. Examples of applications of neutrosophic logic to physics are listed in the last section. Similar generalizations can be done for the n-Valued Refined Neutrosophic Set and, respectively, the n-Valued Refined Neutrosophic Probability.", "title": "" }, { "docid": "b84d8b711738bbd889a3a88ba82f45c0", "text": "Transmission over a wireless channel is challenging, and different applications require different signal processing approaches in the radio system. A highly reconfigurable radio system is therefore in great demand, as traditional fixed and embedded radio systems are not able to cater to the frequently changing requirements of wireless communication. A software defined radio, better known as an SDR, is a software-based radio platform that offers the flexibility to deliver such highly reconfigurable system requirements. This approach allows different communication system requirements, such as the standard, protocol, or signal processing method, to be deployed using the same set of hardware and software, such as a USRP and GNU Radio respectively. For researchers, this approach has opened the door to extending their studies from the simulation domain into the experimental domain. However, the realization of the SDR concept is inherently limited by the analog components of the hardware being used. Despite that, the implementation of SDR is still new yet progressing; thus, this paper intends to provide insight into its viability as a highly reconfigurable platform for communication systems.
This paper presents an SDR-based transceiver for common digital modulation schemes by means of GNU Radio and a USRP.", "title": "" }, { "docid": "5a77a8a9e0a1ec5284d07140fff06f66", "text": "Among the many challenges facing modern space physics today is the need for a visualisation and analysis package which can examine the results from the diversity of numerical and empirical computer models as well as observational data. Magnetohydrodynamic (MHD) models represent the latest numerical models of the complex Earth’s space environment and have the unique ability to span the enormous distances present in the magnetosphere from several hundred kilometres to several thousand kilometres above the Earth's surface. This feature enables scientists to study complex structures and processes where otherwise only point measurements from satellites or ground-based instruments are available. Only by combining these observational data and the MHD simulations is it possible to enlarge the scope of the point-to-point observations and to fill the gaps left by measurements in order to get a full 3-D representation of the processes in our geospace environment. In this paper we introduce the VisAn MHD toolbox for Matlab as a tool for the visualisation and analysis of observational data and MHD simulations. We have created an easy to use tool which is capable of highly sophisticated visualisations and data analysis of the results from a diverse set of MHD models in combination with in situ measurements from satellites and ground-based instruments. The toolbox is being released under an open-source licensing agreement to facilitate and encourage community use and contribution.", "title": "" }, { "docid": "d8839a4ee6afb89a49d807861f8d3a08", "text": "Single-phase photovoltaic (PV) energy conversion systems are the main solution for small-scale rooftop PV applications.
Some multilevel topologies have been commercialized for PV systems, and they are an attractive alternative for implementing small-scale rooftop PV applications. Efficiency, reliability, power quality and power losses are important concepts to consider in PV converters. For this reason this paper presents a comparison of four multilevel converters based on the T-type topology proposed by Conergy. The presented control scheme is based on single-phase voltage oriented control, and simulation results are presented to provide a preliminary validation of each topology. Finally, a summary table with the different features of the converters is provided.", "title": "" } ]
scidocsrr
70a6a2388b519e3b3e95d6a55440d96c
Deep multimodal fusion for persuasiveness prediction
[ { "docid": "2bb194184bea4b606ec41eb9eee0bfaa", "text": "Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.", "title": "" } ]
[ { "docid": "bfd94756f73fc7f9eb81437f5d192ac3", "text": "Technological advances in upper-limb prosthetic design offer dramatically increased possibilities for powered movement. The DEKA Arm system allows users 10 powered degrees of movement. Learning to control these movements by utilizing a set of motions that, in most instances, differ from those used to obtain the desired action prior to amputation is a challenge for users. In the Department of Veterans Affairs \"Study to Optimize the DEKA Arm,\" we attempted to facilitate motor learning by using a virtual reality environment (VRE) program. This VRE program allows users to practice controlling an avatar using the controls designed to operate the DEKA Arm in the real world. In this article, we provide highlights from our experiences implementing VRE in training amputees to use the full DEKA Arm. This article discusses the use of VRE in amputee rehabilitation, describes the VRE system used with the DEKA Arm, describes VRE training, provides qualitative data from a case study of a subject, and provides recommendations for future research and implementation of VRE in amputee rehabilitation. Our experience has led us to believe that training with VRE is particularly valuable for upper-limb amputees who must master a large number of controls and for those amputees who need a structured learning environment because of cognitive deficits.", "title": "" }, { "docid": "0b28624e1ec6367d8f8fd9ad92c4bc88", "text": "The margin, or the difference between the received signal-to-noise ratio (SNR) and the SNR required to maintain a given bit error ratio (BER), is important to the design and operation of optical amplifier transmission systems. A new technique is described for estimating the SNR at the receiver's decision circuit when the BER is too low to be measured in a reasonable time. The SNR is determined from the behavior of the BER as a function of the decision threshold setting in the region where the BER is measurable.
The authors obtain good agreement between the BER predicted using the measured SNR value and the actual measured BER.", "title": "" }, { "docid": "b591667db2fd53ac9332464b4babd877", "text": "Health insurance fraud is a major crime that imposes significant financial and personal costs on individuals, businesses, government and society as a whole. There is therefore growing concern within the insurance industry about the increasing incidence of abuse and fraud in health insurance. Health insurance fraud drives up the overall costs of insurers, premiums for policyholders and providers, and, in turn, the burden on national finance systems. It encompasses a wide range of illicit practices and illegal acts. This paper provides an approach to detect and predict potential fraud by applying big data, a Hadoop environment and analytic methods, which can lead to rapid detection of claim anomalies. The solution is based on a high volume of historical data from various insurance companies and hospitals in a specific geographical area. Such sources are typically voluminous and diverse, and vary significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, have to be developed. The paper demonstrates the effectiveness and efficiency of the open-source predictive modeling framework we used and describes the results from various predictive modeling techniques. The platform is able to detect erroneous or suspicious records in submitted health care data sets and shows how hospital and other health care data can help detect health insurance fraud by implementing various data analytic modules such as decision trees, clustering and naive Bayesian classification. The aim is to build a model that can identify whether a claim is fraudulent or not by relating data from hospitals and insurance companies, to make health insurance more efficient and to ensure that the money is spent on legitimate causes. 
Critical objectives included the development of a fraud detection engine with an aim to help those in the health insurance business and minimize the loss of funds to fraud.", "title": "" }, { "docid": "f74ea8439f1d0be11e86f7e4838bfc73", "text": "In this paper, we investigate large-scale zero-shot activity recognition by modeling the visual and linguistic attributes of action verbs. For example, the verb “salute” has several properties, such as being a light movement, a social act, and short in duration. We use these attributes as the internal mapping between visual and textual representations to reason about a previously unseen action. In contrast to much prior work that assumes access to gold standard attributes for zero-shot classes and focuses primarily on object attributes, our model uniquely learns to infer action attributes from dictionary definitions and distributed word representations. Experimental results confirm that action attributes inferred from language can provide a predictive signal for zero-shot prediction of previously unseen activities.", "title": "" }, { "docid": "363e799cd63907ce64ad405cfdff3b56", "text": "This paper discusses visual methods that can be used to understand and interpret the results of classification using support vector machines (SVM) on data with continuous real-valued variables. SVM induction algorithms build pattern classifiers by identifying a maximal margin separating hyperplane from training examples in high dimensional pattern spaces or spaces induced by suitable nonlinear kernel transformations over pattern spaces. SVM have been demonstrated to be quite effective in a number of practical pattern classification tasks. Since the separating hyperplane is defined in terms of more than two variables it is necessary to use visual techniques that can navigate the viewer through high-dimensional spaces. 
We demonstrate the use of projection-based tour methods to gain useful insights into SVM classifiers with linear kernels on 8-dimensional data.", "title": "" }, { "docid": "b825426604420620e1bba43c0f45115e", "text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent work on extracting taxonomic relations from text has focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of the contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experimental results show that the proposed method is highly complementary to previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.", "title": "" }, { "docid": "299f24e2ef6cc833d008656a5d8e4552", "text": "In computational intelligence, the term ‘memetic algorithm’ has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a ‘meme’ has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate, as ‘memetic algorithm’ is too specific, and ultimately a misnomer, as much as a ‘meme’ is defined too generally to be of scientific use. In this paper, we extend the notion of memes from a computational viewpoint and explore the purpose, definitions, design guidelines and architecture for effective memetic computing. 
Utilizing two conceptual case studies, we illustrate the power of high-order meme-based learning. With applications ranging from cognitive science to machine learning, memetic computing has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher order learning.", "title": "" }, { "docid": "effe6b869444790d513a5404049452e6", "text": "We develop an approach to combine two types of music generation models, namely symbolic and raw audio models. While symbolic models typically operate at the note level and are able to capture long-term dependencies, they lack the expressive richness and nuance of performed music. Raw audio models train directly on raw audio waveforms, and can be used to produce expressive music; however, these models typically lack structure and long-term dependencies. We describe a work-in-progress model that trains a raw audio model based on the recently-proposed WaveNet architecture, but that incorporates the notes of the composition as a secondary input to the network. When generating novel compositions, we utilize an LSTM network whose output feeds into the raw audio model, thus yielding an end-to-end model that generates raw audio outputs combining the best of both worlds. We describe initial results of our approach, which we believe to show considerable promise for structured music generation.", "title": "" }, { "docid": "6dfb62138ad7e0c23826a2c6b7c2507e", "text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics of Chinese Mandarin compared with English, it is worthwhile to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using an end-to-end learning method. 
The system is based on a combination of deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using a CTC-trained network and a language model. Although these results are not as good as the performance of a DNN-HMM hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-to-end speech recognition system.", "title": "" }, { "docid": "d3572050b68eebeca483616c7c1833dd", "text": "Explanations for women's underrepresentation in math-intensive fields of science often focus on sex discrimination in grant and manuscript reviewing, interviewing, and hiring. Claims that women scientists suffer discrimination in these arenas rest on a set of studies undergirding policies and programs aimed at remediation. More recent and robust empiricism, however, fails to support assertions of discrimination in these domains. To better understand women's underrepresentation in math-intensive fields and its causes, we reprise claims of discrimination and their evidentiary bases. Based on a review of the past 20 y of data, we suggest that some of these claims are no longer valid and, if uncritically accepted as current causes of women's lack of progress, can delay or prevent understanding of contemporary determinants of women's underrepresentation. We conclude that differential gendered outcomes in the real world result from differences in resources attributable to choices, whether free or constrained, and that such choices could be influenced and better informed through education if resources were so directed. 
Thus, the ongoing focus on sex discrimination in reviewing, interviewing, and hiring represents costly, misplaced effort: Society is engaged in the present in solving problems of the past, rather than in addressing meaningful limitations deterring women's participation in science, technology, engineering, and mathematics careers today. Addressing today's causes of underrepresentation requires focusing on education and policy changes that will make institutions responsive to differing biological realities of the sexes. Finally, we suggest potential avenues of intervention to increase gender fairness that accord with current, as opposed to historical, findings.", "title": "" }, { "docid": "d6f278b9c9cc72a85c94659729b143bc", "text": "Diet and physical activity are known as important lifestyle factors in self-management and prevention of many chronic diseases. Mobile sensors such as accelerometers have been used to measure physical activity or detect eating time. In many intervention studies, however, stringent monitoring of overall dietary composition and energy intake is needed. Currently, such a monitoring relies on self-reported data by either entering text or taking an image that represents food intake. These approaches suffer from limitations such as low adherence in technology adoption and time sensitivity to the diet intake context. In order to address these limitations, we introduce development and validation of Speech2Health, a voice-based mobile nutrition monitoring system that devises speech processing, natural language processing (NLP), and text mining techniques in a unified platform to facilitate nutrition monitoring. After converting the spoken data to text, nutrition-specific data are identified within the text using an NLP-based approach that combines standard NLP with our introduced pattern mapping technique. We then develop a tiered matching algorithm to search the food name in our nutrition database and accurately compute calorie intake values. 
We evaluate Speech2Health using real data collected with 30 participants. Our experimental results show that Speech2Health achieves an accuracy of 92.2% in computing calorie intake. Furthermore, our user study demonstrates that Speech2Health achieves significantly higher scores on technology adoption metrics compared to text-based and image-based nutrition monitoring. Our research demonstrates that new sensor modalities such as voice can be used either standalone or as a complementary source of information to existing modalities to improve the accuracy and acceptability of mobile health technologies for dietary composition monitoring.", "title": "" }, { "docid": "370c728b64c8cf6c63815729f4f9b03e", "text": "Previous researchers studying baseball pitching have compared kinematic and kinetic parameters among different types of pitches, focusing on the trunk, shoulder, and elbow. The lack of data on the wrist and forearm limits the understanding of clinicians, coaches, and researchers regarding the mechanics of baseball pitching and the differences among types of pitches. The purpose of this study was to expand existing knowledge of baseball pitching by quantifying and comparing kinematic data of the wrist and forearm for the fastball (FA), curveball (CU) and change-up (CH) pitches. Kinematic and temporal parameters were determined from 8 collegiate pitchers recorded with a four-camera system (200 Hz). Although significant differences were observed for all pitch comparisons, the least number of differences occurred between the FA and CH. During arm cocking, peak wrist extension for the FA and CH pitches was greater than for the CU, while forearm supination was greater for the CU. In contrast to the current study, previous comparisons of kinematic data for trunk, shoulder, and elbow revealed similarities between the FA and CU pitches and differences between the FA and CH pitches. 
Kinematic differences among pitches depend on the segment of the body studied.", "title": "" }, { "docid": "6018c84c0e5666b5b4615766a5bb98a9", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "29258360cd268748c19dd613c75b1023", "text": "Despite continuously improving performance, contemporary image captioning models are prone to “hallucinating” objects that are not actually in a scene. One problem is that standard metrics only measure similarity to ground truth captions and may not fully capture image relevance. In this work, we propose a new image relevance metric to evaluate current models with veridical visual labels and assess their rate of object hallucination. We analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination. We investigate these questions on the standard image captioning benchmark, MSCOCO, using a diverse set of models. 
Our analysis yields several interesting findings, including that models which score best on standard sentence metrics do not always have lower hallucination and that models which hallucinate more tend to make errors driven by language priors.", "title": "" }, { "docid": "b59b5bfb0758a07a72c6bbd7f90212e0", "text": "The ease with which digital images can be manipulated without severe degradation of quality makes it necessary to be able to verify the authenticity of digital images. One way to establish image authenticity is by computing a hash sequence from an image. This hash sequence must be robust against non content-altering manipulations, but must be able to show if the content of the image has been tampered with. Furthermore, the hash has to have enough differentiating power such that the hash sequences from two different images are not similar. This paper presents an image hashing system based on local Histogram of Oriented Gradients. The system is shown to have good differentiating power, to be robust against non content-altering manipulations such as filtering and JPEG compression, and to be sensitive to content-altering attacks.", "title": "" }, { "docid": "a88e8fac39e0bef4746381930455be6d", "text": "Predicting macroscopic influences of drugs on the human body, like efficacy and toxicity, is a central problem of small-molecule-based drug discovery. Molecules can be represented as an undirected graph, and we can utilize graph convolution networks to predict molecular properties. However, graph convolutional networks and other graph neural networks all focus on learning node-level representation rather than graph-level representation. Previous works simply sum the feature vectors of all nodes in the graph to obtain the graph feature vector for drug prediction. 
In this paper, we introduce a dummy super node that is connected with all nodes in the graph by a directed edge as the representation of the graph and modify the graph operation to help the dummy super node learn graph-level features. Thus, we can handle graph-level classification and regression in the same way as node-level classification and regression. In addition, we apply focal loss to address class imbalance in drug datasets. The experiments on MoleculeNet show that our method can effectively improve the performance of molecular property prediction.", "title": "" }, { "docid": "959547839a5769d6bfcca0efa6568cbf", "text": "Conventionally, maximum capacities for energy assimilation are presented as daily averages. However, maximum daily energy intake is determined by the maximum metabolizable energy intake rate and the time available for assimilation of food energy. Thrush nightingales (Luscinia luscinia) in migratory disposition were given limited food rations for 3 d to reduce their energy stores. Subsequently, groups of birds were fed ad lib. during fixed time periods varying between 7 and 23 h per day. Metabolizable energy intake rate, averaged over the available feeding time, was 1.9 W and showed no difference between groups on the first day of refueling. Total daily metabolizable energy intake increased linearly with available feeding time, and for the 23-h group, it was well above suggested maximum levels for animals. We conclude that both intake rate and available feeding time must be taken into account when interpreting potential constraints acting on animals' energy budgets. In the 7-h group, energy intake rates increased from 1.9 W on the first day to 3.1 W on the seventh day. 
This supports the idea that small birds can adaptively increase their energy intake rates on a short timescale.", "title": "" }, { "docid": "5b75356c6fc7e277158210f0b4640e41", "text": "A central methodological problem of historical studies, in linguistics as in other disciplines, is that data are limited to what happens to have survived the vicissitudes of time. In particular, we cannot perform experiments to broaden the range of facts available for analysis, to compensate for sampling biases in the preservation of data or to test the validity of hypotheses. In historical syntax, the domain of this study, the problem is particularly acute, since grammatical analysis depends on negative evidence, the knowledge that certain sentence types are unacceptable. When we study living languages, we obtain such information experimentally, usually by elicitation of judgments of acceptability from informants. Though the methodological difficulties inherent in the experimental method of contemporary syntactic investigation may be substantial (Labov, 1975b), the information it provides forms the necessary basis of grammatical analysis. Hence, syntacticians who wish to interrogate historical material find themselves in difficulty. The difficulty will be mitigated if two reasonable assumptions are made (see, for example, Adams, 1987b; Santorini, 1989): 1) The past is like the present and general principles derived from the study of living languages in the present will hold of archaic ones as well. This assumption allows the historical syntactician to, in the words of Labov, \"use the present to explain the past (Labov, 1975a).\" 2) For reasonably simple sentences, if a certain type does not occur in a substantial corpus, then it is not grammatically possible in the language of that corpus. Here the assumption is, of course, problematic since non-occurrence in a corpus may always be due to non-grammatical, contextual factors or even to chance. 
Still, for structurally simple cases, including those we will be considering in this paper, it is unlikely to lead us far astray.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions, thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which incorporates quantitative, visual and discriminative performance into the objective function. 
Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" }, { "docid": "de3306194639c2f2f2a4c06b9075b58d", "text": "BACKGROUND\nDevastating fourth-degree electrical injuries to the face and head pose significant reconstructive challenges. To date, there have been few peer-reviewed articles in the literature that describe those reconstructive challenges. The authors present the largest case series to date that describes the management of these injuries, including the incorporation of face transplantation.\n\n\nMETHODS\nA retrospective case series was conducted of patients with devastating electrical injuries to the face who were managed at two level-1 trauma centers between 2007 and 2011. Data describing patient injuries, initial management, and reconstructive procedures were collected.\n\n\nRESULTS\nFive patients with devastating electrical injuries to the face were reviewed. After initial stabilization and treatment of life-threatening injuries, all five underwent burn excision and microsurgical reconstruction using distant flaps. Two of the patients eventually underwent face transplantation. The authors describe differences in management between the two trauma centers, one of which had the availability for composite tissue allotransplantation; the other did not. Also described is how initial attempts at traditional reconstruction affected the eventual face transplantation.\n\n\nCONCLUSIONS\nThe care of patients with complex electrical burns must be conducted in a multidisciplinary fashion. As with all other trauma, the initial priority should be management of the airway, breathing, and circulation. Additional considerations include cardiac arrhythmias and renal impairment attributable to myoglobinuria. 
Before embarking on aggressive reconstruction attempts, it is advisable to determine early whether the patient is a candidate for face transplantation in order to avoid antigen sensitization, loss of a reconstructive \"lifeboat,\" surgical plane disruption, and sacrifice of potential recipient vessels.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" } ]
scidocsrr
987e0266c73109191ccbacf73747a6b3
Performance optimization of Hadoop cluster using linux services
[ { "docid": "b104337e30aa30db3dadc4e254ed2ad4", "text": "We live in on-demand, on-command Digital universe with data prolifering by Institutions, Individuals and Machines at a very high rate. This data is categories as \"Big Data\" due to its sheer Volume, Variety and Velocity. Most of this data is unstructured, quasi structured or semi structured and it is heterogeneous in nature. The volume and the heterogeneity of data with the speed it is generated, makes it difficult for the present computing infrastructure to manage Big Data. Traditional data management, warehousing and analysis systems fall short of tools to analyze this data. Due to its specific nature of Big Data, it is stored in distributed file system architectures. Hadoop and HDFS by Apache is widely used for storing and managing Big Data. Analyzing Big Data is a challenging task as it involves large distributed file systems which should be fault tolerant, flexible and scalable. Map Reduce is widely been used for the efficient analysis of Big Data. Traditional DBMS techniques like Joins and Indexing and other techniques like graph search is used for classification and clustering of Big Data. These techniques are being adopted to be used in Map Reduce. In this paper we suggest various methods for catering to the problems in hand through Map Reduce framework over Hadoop Distributed File System (HDFS). Map Reduce is a Minimization technique which makes use of file indexing with mapping, sorting, shuffling and finally reducing. Map Reduce techniques have been studied in this paper which is implemented for Big Data analysis using HDFS.", "title": "" } ]
[ { "docid": "5ea560095b752ca8e7fb6672f4092980", "text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.", "title": "" }, { "docid": "bf08bc98eb9ef7a18163fc310b10bcf6", "text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. 
In addition, the chip area is 0.013 mm2.", "title": "" }, { "docid": "443a4fe9e7484a18aa53a4b142d93956", "text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.", "title": "" }, { "docid": "8709706ffafdadfc2fb9210794dfa782", "text": "The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. 
Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy generation related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Agency also predicts that electricity usage for residential appliances would grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people’s standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use of electricity resources. The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. 
These savings can be made through two avenues: the first is through the use of energy-efficient lighting and HVAC systems; and the second is through the deployment of energy management systems which utilize real time price information to schedule loads to minimize energy bills. The latter scheme requires an intelligent power grid or smart grid which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligence and bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.", "title": "" }, { "docid": "80fd067dd6cf2fe85ade3c632e82c04c", "text": "Recommender systems are powerful tools that allow companies to present personalized offers to their customers and are defined as systems which recommend an appropriate product or service after learning the customers’ preferences and desires. Extracting users’ preferences through their buying behavior and history of purchased products is the most important element of such systems. Due to users’ unlimited and unpredictable desires, identifying their preferences is a very complicated process. In most studies, less attention has been paid to the variation of users’ preferences across different product categories. This may decrease the quality of recommended items. In this paper, we introduce a technique of recommendation in the context of an online retail store which extracts user preferences in each product category separately and provides more personalized recommendations through employing product taxonomy, attributes of product categories, web usage mining and a combination of two well-known filtering methods: collaborative and content-based filtering.
Experimental results show that the proposed technique improves quality, as compared to similar approaches.", "title": "" }, { "docid": "c7a96129484bbedd063a0b322d9ae3d3", "text": "BACKGROUND\nNon-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and the paucity of SNPs in a genome.\n\n\nRESULTS\nWe have developed an alternative framework for identifying sub-chromosomal copy number variations in a fetal genome. This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of the cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with a fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X and simulated samples with CNVs based on it.
Our method had a perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%.\n\n\nAVAILABILITY AND IMPLEMENTATION\nOur source code is available on GitHub at https://github.com/compbio-UofT/FSDA CONTACT: : brudno@cs.toronto.edu.", "title": "" }, { "docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db", "text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.", "title": "" }, { "docid": "88302ac0c35e991b9db407f268fdb064", "text": "We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the chip layout based on the state-of-the-art memory, LPDDR4 where McDRAM is equipped with 2048 MACs in a single chip package with a small area overhead (4.7%). Compared with the state-of-the-art accelerator, TPU and the power-efficient GPU, Nvidia P4, McDRAM offers <inline-formula> <tex-math notation=\"LaTeX\">$9.5{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$14.4{\\times }$ </tex-math></inline-formula> speedup, respectively, in the case that the large-scale MLPs and RNNs adopt the batch size of 1. 
McDRAM also gives <inline-formula> <tex-math notation=\"LaTeX\">$2.1{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$3.7{\\times }$ </tex-math></inline-formula> better computational efficiency in TOPS/W than TPU and P4, respectively, for the large batches.", "title": "" }, { "docid": "3a5dacb4b43f663539108ed1524f0c06", "text": "This paper describes the design of CMOS receiver electronics for monolithic integration with capacitive micromachined ultrasonic transducer (CMUT) arrays for high-frequency intravascular ultrasound imaging. A custom 8-inch (20-cm) wafer is fabricated in a 0.35-μm two-poly, four-metal CMOS process and then CMUT arrays are built on top of the application specific integrated circuits (ASICs) on the wafer. We discuss advantages of the single-chip CMUT-on-CMOS approach in terms of receive sensitivity and SNR. Low-noise and high-gain design of a transimpedance amplifier (TIA) optimized for a forward-looking volumetric-imaging CMUT array element is discussed as a challenging design example. Amplifier gain, bandwidth, dynamic range, and power consumption trade-offs are discussed in detail. With minimized parasitics provided by the CMUT-on-CMOS approach, the optimized TIA design achieves a 90 fA/√Hz input-referred current noise, which is less than the thermal-mechanical noise of the CMUT element. We show successful system operation with a pulseecho measurement. Transducer-noise-dominated detection in immersion is also demonstrated through output noise spectrum measurement of the integrated system at different CMUT bias voltages. A noise figure of 1.8 dB is obtained in the designed CMUT bandwidth of 10 to 20 MHz.", "title": "" }, { "docid": "59a69e5d33d650ef3e4afc053a98abe6", "text": "Three-dimensional television (3D-TV) is the next major revolution in television. 
A successful rollout of 3D-TV will require a backward-compatible transmission/distribution system, inexpensive 3D displays, and an adequate supply of high-quality 3D program material. With respect to the last factor, the conversion of 2D images/videos to 3D will play an important role. This paper provides an overview of automatic 2D-to-3D video conversion with a specific look at a number of approaches for both the extraction of depth information from monoscopic images and the generation of stereoscopic images. Some challenging issues for the success of automatic 2D-to-3D video conversion are pointed out as possible research topics for the future.", "title": "" }, { "docid": "8f360c907e197beb5e6fc82b081c908f", "text": "This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.", "title": "" }, { "docid": "b723616272d078bdbaaae1cf650ace20", "text": "Most industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. This paper proposes an accelerometer-based system to control an industrial robot using two low-cost and small 3-axis wireless accelerometers.
These accelerometers are attached to the human arms, capturing their behavior (gestures and postures). An Artificial Neural Network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which are then used as input in the control of the robot. The aim is that the robot starts the movement almost at the same time as the user starts to perform a gesture or posture (low response time). The results show that the system allows the control of an industrial robot in an intuitive way. However, the achieved recognition rate of gestures and postures (92%) should be improved in the future, while maintaining the trade-off with the system response time (160 milliseconds). Finally, the results of some tests performed with an industrial robot are presented and discussed.", "title": "" }, { "docid": "d469d31d26d8bc07b9d8dfa8ce277e47", "text": "BACKGROUND/PURPOSE\nMorbidity in children treated for appendicitis results either from late diagnosis or negative appendectomy. A prospective analysis of the efficacy of the Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001.
These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nThe Pediatric Appendicitis Score is a simple, relatively accurate diagnostic tool for assessing an acute abdomen and diagnosing appendicitis in children.", "title": "" }, { "docid": "51c0cdb22056a3dc3f2f9b95811ca1ca", "text": "Technology plays a major role in healthcare, not only in sensing devices but also in communication, recording and display devices. It is very important to monitor various medical parameters during post-operational days. Hence, the latest trend of healthcare communication using IoT is adopted. The Internet of Things serves as a catalyst for healthcare and plays a prominent role in a wide range of healthcare applications. In this project the PIC18F46K22 microcontroller is used as a gateway to communicate with various sensors such as a temperature sensor and a pulse oximeter sensor. The microcontroller picks up the sensor data and sends it to the network through Wi-Fi and hence provides real-time monitoring of the healthcare parameters for doctors. The data can be accessed anytime by the doctor. The controller is also connected with a buzzer to alert the caretaker about variations in sensor output. However, the major issue in a remote patient monitoring system is that the data has to be securely transmitted to the destination end, and provision must be made to allow only authorized users to access the data.
The security issue is addressed by transmitting the data through the password-protected Wi-Fi module ESP8266, encrypted with standard AES128, and the users/doctor can access the data by logging in to the HTML webpage. In an emergency situation, an alert message is sent to the doctor through the GSM module connected to the controller. Hence, quick provisional medication can easily be arranged with this system. This system is efficient with low power consumption capability, easy setup, high performance and timely response.", "title": "" }, { "docid": "d07d6fe33b01fbfb21ba5adc76ec786f", "text": "Dunaliella salina (Dunal) Teod, a unicellular eukaryotic green alga, is a highly salt-tolerant organism. To identify novel genes with potential roles in salinity tolerance, a salt stress-induced D. salina cDNA library was screened based on the expression in Haematococcus pluvialis, an alga also from Volvocales but one that is hypersensitive to salt. Five novel salt-tolerant clones were obtained from the library. Among them, Ds-26-16 and Ds-A3-3 contained the same open reading frame (ORF) and encoded a 6.1 kDa protein. Transgenic tobacco overexpressing Ds-26-16 and Ds-A3-3 exhibited increased leaf area, stem height, root length, total chlorophyll, and glucose content, but decreased proline content, peroxidase activity, and ascorbate content, and enhanced transcript level of Na+/H+ antiporter salt overly sensitive 1 gene (NtSOS1) expression, compared to those in the control plants under salt condition, indicating that Ds-26-16 enhanced the salt tolerance of tobacco plants. The transcript of Ds-26-16 in D. salina was upregulated in response to salt stress. The expression of Ds-26-16 in Escherichia coli showed that the ORF contained the functional region and changed the protein(s) expression profile. A mass spectrometry assay suggested that the most abundant and smallest protein that changed is possibly a DNA-binding protein or Cold shock-like protein.
Subcellular localization analysis revealed that Ds-26-16 was located in the nuclei of onion epidermal cells or the nucleoid of E. coli cells. In addition, the possible use of shoots regenerated from leaf discs to quantify the salt tolerance of the transgene at the initial stage of tobacco transformation was also discussed.", "title": "" }, { "docid": "ca32fb4df9c03951e14ce9e06f7d90a0", "text": "Future wireless local area networks (WLANs) are expected to serve thousands of users in diverse environments. To address the new challenges that WLANs will face, and to overcome the limitations that previous IEEE standards introduced, a new IEEE 802.11 amendment is under development. IEEE 802.11ax aims to enhance spectrum efficiency in a dense deployment; hence system throughput improves. Dynamic Sensitivity Control (DSC) and BSS Color are the main schemes under consideration in IEEE 802.11ax for improving spectrum efficiency. In this paper, we evaluate DSC and BSS Color schemes when physical layer capture (PLC) is modelled. PLC refers to the case in which a receiver successfully decodes the stronger frame when a collision occurs. It is shown that PLC could potentially lead to fairness issues and higher throughput in specific cases. We study PLC in a small and large scale scenario, and show that PLC could also improve fairness in specific scenarios.", "title": "" }, { "docid": "0acf9ef6e025805a76279d1c6c6c55e7", "text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses a significant risk since users of mobile devices cannot change the content of the malicious firmwares.
Furthermore, pre-installed applications have \"more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have a signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain a malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.", "title": "" }, { "docid": "a31287791b12f55adebacbb93a03c8bc", "text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates whether emotional adaptation still works in the absence of human-like facial Action Units.
A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on established measures of human-robot interaction (HRI). The results of the user study reveal a significant increase in helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in the case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest that the reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such a phase found in previous work. Additionally, an interaction with the academic background of the participants is found.", "title": "" }, { "docid": "6e46fd2a8370bc42d245ca128c9f537b", "text": "A literature review of the associations between involvement in bullying and depression is presented. Many studies have demonstrated a concurrent association between involvement in bullying and depression in adolescent population samples. Not only victims but also bullies display increased risk of depression, although not all studies have confirmed this for the bullies. Retrospective studies among adults support the notion that victimization is followed by depression. Prospective follow-up studies have suggested both that victimization from bullying may be a risk factor for depression and that depression may predispose adolescents to bullying.
Research among clinically referred adolescents is scarce but suggests that correlations between victimization from bullying and depression are likely to be similar in clinical and population samples. Adolescents who bully present with elevated numbers of psychiatric symptoms and psychiatric and social welfare treatment contacts.", "title": "" }, { "docid": "d00f7e5085d5aa9d8ac38f2abc7b5237", "text": "Data-driven machine learning, in particular deep learning, is improving state-ofthe-art in many healthcare prediction tasks. A current standard protocol is to collect patient data to build, evaluate, and deploy machine learning algorithms for specific age groups (say source domain), which, if not properly trained, can perform poorly on data from other age groups (target domains). In this paper, we address the question of whether it is possible to adapt machine learning models built for one age group to also perform well on other age groups. Additionally, healthcare time series data is also challenging in that it is usually longitudinal and episodic with the potential of having complex temporal relationships. We address these problems with our proposed adversarially trained Variational Adversarial Deep Domain Adaptation (VADDA) model built atop a variational recurrent neural network, which has been shown to be capable of capturing complex temporal latent relationships. We assume and empirically justify that patient data from different age groups can be treated as being similar but different enough to be classified as coming from different domains, requiring the use of domain-adaptive approaches. Through experiments on the MIMIC-III dataset we demonstrate that our model outperforms current state-of-the-art domain adaptation approaches, being (as far as we know) the first to accomplish this for healthcare time-series data.", "title": "" } ]
scidocsrr
e928e50e7191ad2b7de5ae53d23205fe
Relational dynamic memory networks
[ { "docid": "a32d6897d74397f5874cc116221af207", "text": "A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.", "title": "" } ]
[ { "docid": "36d261d49f898664a6f42a84911a8b7c", "text": "Items in real-world recommender systems exhibit certain hierarchical structures. Similarly, user preferences also present hierarchical structures. Recent studies show that incorporating the hierarchy of items or user preferences can improve the performance of recommender systems. However, hierarchical structures are often not explicitly available, especially those of user preferences. Thus, there's a gap between the importance of hierarchies and their availability. In this paper, we investigate the problem of exploring the implicit hierarchical structures for recommender systems when they are not explicitly available. We propose a novel recommendation framework to bridge the gap, which enables us to explore the implicit hierarchies of users and items simultaneously. We then extend the framework to integrate explicit hierarchies when they are available, which gives a unified framework for both explicit and implicit hierarchical structures. Experimental results on real-world datasets demonstrate the effectiveness of the proposed framework by incorporating implicit and explicit structures.", "title": "" }, { "docid": "bb5f748fa34ddc91389fb22ad8c1d163", "text": "Hate speech detection on Twitter is critical for applications like controversial event extraction, building AI chatterbots, content recommendation, and sentiment analysis. We define this task as being able to classify a tweet as racist, sexist or neither. The complexity of the natural language constructs makes this task very challenging. We perform extensive experiments with multiple deep learning architectures to learn semantic word embeddings to handle this complexity. 
Our experiments on a benchmark dataset of 16K annotated tweets show that such deep learning methods outperform state-of-the-art char/word n-gram methods by ∼18 F1 points.", "title": "" }, { "docid": "bfd946e8b668377295a1672a7bb915a3", "text": "Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.", "title": "" }, { "docid": "6514ddb39c465a8ca207e24e60071e7f", "text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. 
The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.", "title": "" }, { "docid": "8f29de514e2a266a02be4b75d62be44f", "text": "In this work, we apply word embeddings and neural networks with Long Short-Term Memory (LSTM) to text classification problems, where the classification criteria are decided by the context of the application. We examine two applications in particular. The first is that of Actionability, where we build models to classify social media messages from customers of service providers as Actionable or Non-Actionable. We build models for over 30 different languages for actionability, and most of the models achieve accuracy around 85%, with some reaching over 90% accuracy. We also show that using LSTM neural networks with word embeddings vastly outperform traditional techniques. Second, we explore classification of messages with respect to political leaning, where social media messages are classified as Democratic or Republican. The model is able to classify messages with a high accuracy of 87.57%. As part of our experiments, we vary different hyperparameters of the neural networks, and report the effect of such variation on the accuracy. These actionability models have been deployed to production and help company agents provide customer support by prioritizing which messages to respond to. The model for political leaning has been opened and made available for wider use.", "title": "" }, { "docid": "2afbb4e8963b9e6953fd6f7f8c595c06", "text": "Large-scale linguistically annotated corpora have played a crucial role in advancing the state of the art of key natural language technologies such as syntactic, semantic and discourse analyzers, and they serve as training data as well as evaluation benchmarks. Up till now, however, most of the evaluation has been done on monolithic corpora such as the Penn Treebank, the Proposition Bank. 
As a result, it is still unclear how the state-of-the-art analyzers perform in general on data from a variety of genres or domains. The completion of the OntoNotes corpus, a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information, makes it possible to perform such an evaluation. This paper presents an analysis of the performance of publicly available, state-of-the-art tools on all layers and languages in the OntoNotes v5.0 corpus. This should set the benchmark for future development of various NLP components in syntax and semantics, and possibly encourage research towards an integrated system that makes use of the various layers jointly to improve overall performance.", "title": "" }, { "docid": "12a34678fa46825e11944f317fdd4977", "text": "The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each of the connected computers. This paper establishes a viewpoint that emphasizes the dispersed structure and decentralization of both data and control in the design of such systems. It defines the concepts of transparency, fault tolerance, and scalability and discusses them in the context of DFSs. The paper claims that the principle of distributed operation is fundamental for a fault tolerant and scalable DFS design. It also presents alternatives for the semantics of sharing and methods for providing access to remote files. A survey of contemporary UNIX-based systems, namely, UNIX United, Locus, Sprite, Sun's Network File System, and ITC's Andrew, illustrates the concepts and demonstrates various implementations and design alternatives. 
Based on the assessment of these systems, the paper makes the point that a departure from extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.", "title": "" }, { "docid": "21025b37c5c172399c63148f1bfa49ab", "text": "Buffer overflows belong to the most common class of attacks on today’s Internet. Although stack-based variants are still by far more frequent and well-understood, heap-based overflows have recently gained more attention. Several real-world exploits have been published that corrupt heap management information and allow arbitrary code execution with the privileges of the victim process. This paper presents a technique that protects the heap management information and allows for run-time detection of heap-based overflows. We discuss the structure of these attacks and our proposed detection scheme that has been implemented as a patch to the GNU Lib C. We report the results of our experiments, which demonstrate the detection effectiveness and performance impact of our approach. In addition, we discuss different mechanisms to deploy the memory protection.", "title": "" }, { "docid": "2363f0f9b50bc2ebbccb0746bb6b1080", "text": "This communication presents a wideband, dual-polarized Vivaldi antenna or tapered slot antenna with over a decade (10.7:1) of bandwidth. The dual-polarized antenna structure is achieved by inserting two orthogonal Vivaldi antennas in a cross-shaped form without a galvanic contact. The measured -10 dB impedance bandwidth (S11) is approximately from 0.7 up to 7.30 GHz, corresponding to a 166% relative frequency bandwidth. The isolation (S21) between the antenna ports is better than 30 dB, and the measured maximum gain is 3.8-11.2 dB at the aforementioned frequency bandwidth. Orthogonal polarizations have the same maximum gain within the 0.7-3.6 GHz band, and a slight variation up from 3.6 GHz.
The cross-polarization discrimination (XPD) is better than 19 dB across the measured 0.7-6.0 GHz frequency bandwidth, and better than 25 dB up to 4.5 GHz. The measured results are compared with the numerical ones in terms of S-parameters, maximum gain, and XPD.", "title": "" }, { "docid": "2ca5118d8f4402ed1a2d1c26fbcf9f53", "text": "Weakly supervised data is an important kind of machine learning data that can help improve learning performance. However, recent results indicate that machine learning techniques with the usage of weakly supervised data may sometimes cause performance degradation. Safely leveraging weakly supervised data is important, whereas there is only very limited effort, especially on a general formulation to help provide insight to guide safe weakly supervised learning. In this paper we present a scheme that builds the final prediction results by integrating several weakly supervised learners. Our resultant formulation brings two advantages. i) For the commonly used convex loss functions in both regression and classification tasks, safeness guarantees exist under a mild condition; ii) Prior knowledge related to the weights of base learners can be embedded in a flexible manner. Moreover, the formulation can be addressed globally by a simple convex quadratic or linear program efficiently. Experiments on multiple weakly supervised learning tasks such as label noise learning, domain adaptation and semi-supervised learning validate the effectiveness.", "title": "" }, { "docid": "56ed889e2e7c359393f847f8f45e9bf1", "text": "In culture analytics, it is important to ask fundamental questions that address salient characteristics of collective human behavior. This paper explores how analyzing cooking recipes in aggregate and at scale identifies these characteristics in the cooking culture, and answers fundamental questions like 'what makes a chocolate chip cookie a chocolate chip cookie?'. 
Aspiring cooks, professional chefs and cooking hobbyists share their recipes online resulting in thousands of different procedural instructions towards a shared goal. However, existing approaches focus merely on analysis at the ingredient level, for example, extracting ingredient information from individual recipes. We introduce RecipeScape, a prototype interface which supports visually querying, browsing and comparing cooking recipes at scale. We also present the underlying computational pipeline of RecipeScape that scrapes recipes online, extracts their ingredient and instruction information, constructs a graphical representation, and computes similarity between pairs of recipes.", "title": "" }, { "docid": "7f4b27422520ad678dd2f5f658ffebc3", "text": "We present a generic framework to make wrapper induction algorithms tolerant to noise in the training data. This enables us to learn wrappers in a completely unsupervised manner from automatically and cheaply obtained noisy training data, e.g., using dictionaries and regular expressions. By removing the site-level supervision that wrapper-based techniques require, we are able to perform information extraction at web-scale, with accuracy unattained with existing unsupervised extraction techniques. Our system is used in production at Yahoo! and powers live applications.", "title": "" }, { "docid": "4afa269cb8ff0fb4b90f3fe5ddcd0675", "text": "Sleep specialists often conduct manual sleep stage scoring by visually inspecting the patient’s neurophysiological signals collected at sleep labs. This is, generally, a very difficult, tedious and time-consuming task. The limitations of manual sleep stage scoring have escalated the demand for developing Automatic Sleep Stage Classification (ASSC) systems. Sleep stage classification refers to identifying the various stages of sleep and is a critical step in an effort to assist physicians in the diagnosis and treatment of related sleep disorders. 
The aim of this paper is to survey the progress and challenges in various existing Electroencephalogram (EEG) signal-based methods used for sleep stage identification at each phase, including pre-processing, feature extraction and classification, in an attempt to find the research gaps and possibly introduce a reasonable solution. Many of the prior and current related studies use multiple EEG channels, and are based on 30 s or 20 s epoch lengths which affect the feasibility and speed of ASSC for real-time applications. Thus, in this paper, we also present a novel and efficient technique that can be implemented in an embedded hardware device to identify sleep stages using new statistical features applied to 10 s epochs of single-channel EEG signals. In this study, the PhysioNet Sleep European Data Format (EDF) Database was used. The proposed methodology achieves an average classification sensitivity, specificity and accuracy of 89.06%, 98.61% and 93.13%, respectively, when the decision tree classifier is applied. Finally, our new method is compared with those in recently published studies, which reiterates the high classification accuracy performance.", "title": "" }, { "docid": "36d79b2b2640d1b2ac7f8ef057abc75c", "text": "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. 
The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.", "title": "" }, { "docid": "5f01cb5c34ac9182f6485f70d19101db", "text": "Gastroesophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antacid magaldrate and prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day during a month. Magaldrate/domperidone combination showed a superior efficacy to decrease global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms than domperidone alone. In addition, magaldrate/domperidone combination improved in a statistically significant manner the quality of life of patients with gastroesophageal reflux with respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. 
Data suggest that oral magaldrate/domperidone mixture could be a better option in the treatment of gastroesophageal reflux symptoms than domperidone alone.", "title": "" }, { "docid": "dd82e1c54a2b73e98788eb7400600be3", "text": "Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics has become an important issue in cosmology and astronomy. Existing methods relied on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, it inevitably requires a lot of observations and complex luminance measurements. In this work, we present a novel method for detecting SNeIa simply from single-shot observation images without any complex measurements, by effectively integrating the state-of-the-art computer vision methodology into the standard photometric approach. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with many observations.", "title": "" }, { "docid": "fe116849575dd91759a6c1ef7ed239f3", "text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. 
Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.", "title": "" }, { "docid": "2effb3276d577d961f6c6ad18a1e7b3e", "text": "This paper extends the recovery of structure and motion to image sequences with several independently moving objects. The motion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camera parameters. Existing work on independent motions has not employed this constraint, and therefore has not gained over independent static-scene reconstructions. We show how this constraint leads to several new results in structure and motion recovery, where Euclidean reconstruction becomes possible in the multibody case, when it was underconstrained for a static scene. 
We show how to combine motions of high-relief, low-relief and planar objects. Additionally we show that structure and motion can be recovered from just 4 points in the uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the validity of the theory and the improvement in accuracy obtained using multibody analysis.", "title": "" }, { "docid": "eb9f859b8a8fe6ae9b98638610564a94", "text": "In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools as well as which tools achieve the best results under what circumstances. Among others, we discover that rule-based browser extensions outperform learning-based ones, trackers with smaller footprints are more successful at avoiding being blocked, and CDNs pose a major threat towards the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.", "title": "" } ]
scidocsrr
87d435409e5dd54ef5ae6c22fc661ca3
High-performance secure multi-party computation for data mining applications
[ { "docid": "cd36a4e57a446e25ae612cdc31f6293e", "text": "Privacy and security concerns can prevent sharing of data, derailing data mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. The key is to obtain valid results, while providing guarantees on the (non)disclosure of data. We present a method for k-means clustering when different sites contain different attributes for a common set of entities. Each site learns the cluster of each entity, but learns nothing about the attributes at other sites.", "title": "" } ]
[ { "docid": "eb150ae59ceffae1894c8985931ddfc9", "text": "This paper presents the design and implementation of Constant-Fraction-Discriminators (CFD) suitable for multi-channel mixed-mode ICs. Issues related to area occupation, power consumption and timing accuracy are discussed in detail. The circuits have been designed targeting a 0.13µm CMOS process.", "title": "" }, { "docid": "7bb9f8794f8df481967f6f01b9e9d924", "text": "It is widely realized that the integration of database and information retrieval techniques will provide users with a wide range of high quality services. In this paper, we study processing an l-keyword query, p1, p2, ..., pl, against a relational database which can be modeled as a weighted graph, G(V, E). Here V is a set of nodes (tuples) and E is a set of edges representing foreign key references between tuples. Let Vi ⊆ V be a set of nodes that contain the keyword pi. We study finding top-k minimum cost connected trees that contain at least one node in every subset Vi, and denote our problem as GST-k. When k = 1, it is known as a minimum cost group Steiner tree problem which is NP-complete. We observe that the number of keywords, l, is small, and propose a novel parameterized solution, with l as a parameter, to find the optimal GST-1, in time complexity O(3^l n + 2^l((l + log n)n + m)), where n and m are the numbers of nodes and edges in graph G. Our solution can handle graphs with a large number of nodes. Our GST-1 solution can be easily extended to support GST-k, which outperforms the existing GST-k solutions over both weighted undirected/directed graphs. We conducted extensive experimental studies, and report our finding.", "title": "" }, { "docid": "6b2118549a18be9af844f6bbf11fc0ee", "text": "Feature selection is an important technique for data mining. Despite its importance, most studies of feature selection are restricted to batch learning. 
Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale applications. Most existing studies of online learning require accessing all the attributes/features of training instances. Such a classical setting is not always appropriate for real-world applications when data instances are of high dimensionality or it is expensive to acquire the full set of attributes/features. To address this limitation, we investigate the problem of online feature selection (OFS) in which an online learner is only allowed to maintain a classifier involving only a small and fixed number of features. The key challenge of online feature selection is how to make accurate prediction for an instance using a small number of active features. This is in contrast to the classical setup of online learning where all the features can be used for prediction. We attempt to tackle this challenge by studying sparsity regularization and truncation techniques. Specifically, this article addresses two different tasks of online feature selection: 1) learning with full input, where a learner is allowed to access all the features to decide the subset of active features, and 2) learning with partial input, where only a limited number of features is allowed to be accessed for each instance by the learner. We present novel algorithms to solve each of the two problems and give their performance analysis. We evaluate the performance of the proposed algorithms for online feature selection on several public data sets, and demonstrate their applications to real-world problems including image classification in computer vision and microarray gene expression analysis in bioinformatics. 
The encouraging results of our experiments validate the efficacy and efficiency of the proposed techniques.", "title": "" }, { "docid": "44b44e400b44f3f83b698f9492e5c8b7", "text": "Word vector representation techniques, built on word-word co-occurrence statistics, often provide representations that decode the differences in meaning between various words. This significant fact is a powerful tool that can be exploited in a great deal of natural language processing tasks. In this work, we propose a simple and efficient unsupervised approach for keyphrase extraction, called Reference Vector Algorithm (RVA) which utilizes a local word vector representation by applying the GloVe method in the context of one scientific publication at a time. Then, the mean word vector (reference vector) of the article’s abstract guides the candidate keywords’ selection process, using the cosine similarity. The experimental results that emerged through a thorough evaluation process show that our method outperforms the state-of-the-art methods by providing high quality keyphrases in most cases, proposing in this way an additional mode for the exploitation of GloVe word vectors.", "title": "" }, { "docid": "52c74771c7d9d31ca4c78cf1da7d9c01", "text": "This paper describes the Tezpur University dataset of online handwritten Assamese characters. The online data acquisition process involves the capturing of data as the text is written on a digitizer with an electronic pen. A sensor picks up the pen-tip movements, as well as pen-up/pen-down switching. The dataset contains 8,235 isolated online handwritten Assamese characters. Preliminary results on the classification of online handwritten Assamese characters using the above dataset are presented in this paper. 
The use of the support vector machine classifier and the classification accuracy for three different feature vectors are explored in our research.", "title": "" }, { "docid": "2a1bee8632e983ca683cd5a9abc63343", "text": "Phrase browsing techniques use phrases extracted automatically from a large information collection as a basis for browsing and accessing it. This paper describes a case study that uses an automatically constructed phrase hierarchy to facilitate browsing of an ordinary large Web site. Phrases are extracted from the full text using a novel combination of rudimentary syntactic processing and sequential grammar induction techniques. The interface is simple, robust and easy to use.\nTo convey a feeling for the quality of the phrases that are generated automatically, a thesaurus used by the organization responsible for the Web site is studied and its degree of overlap with the phrases in the hierarchy is analyzed. Our ultimate goal is to amalgamate hierarchical phrase browsing and hierarchical thesaurus browsing: the latter provides an authoritative domain vocabulary and the former augments coverage in areas the thesaurus does not reach.", "title": "" }, { "docid": "2fe0639b8a1fc6c64bb8e177576ec06e", "text": "A new approach for ranking fuzzy numbers based on a distance measure is introduced. A new class of distance measures for interval numbers that takes into account all the points in both intervals is developed first, and then it is used to formulate the distance measure for fuzzy numbers. The approach is illustrated by numerical examples, showing that it overcomes several shortcomings such as the indiscriminative and counterintuitive behavior of several existing fuzzy ranking approaches. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "8fb99cd1e2db6b1e4f3f0c2fa1b125bc", "text": "Temptation pervades modern social life, including the temptation to engage in infidelity. 
The present investigation examines one factor that may put individuals at a greater risk of being unfaithful to their partner: dispositional avoidant attachment style. The authors hypothesize that avoidantly attached people may be less resistant to temptations for infidelity due to lower levels of commitment in romantic relationships. This hypothesis was confirmed in 8 studies. People with high, vs. low, levels of dispositional avoidant attachment had more permissive attitudes toward infidelity (Study 1), showed attentional bias toward attractive alternative partners (Study 2), expressed greater daily interest in meeting alternatives to their current relationship partner (Study 5), perceived alternatives to their current relationship partner more positively (Study 6), and engaged in more infidelity over time (Studies 3, 4, 7, and 8). This effect was mediated by lower levels of commitment (Studies 5-8). Thus, avoidant attachment predicted a broad spectrum of responses indicative of interest in alternatives and propensity to engage in infidelity, which were mediated by low levels of commitment.", "title": "" }, { "docid": "2717779fa409f10f3a509e398dc24233", "text": "Hallyu refers to the phenomenon of Korean popular culture which came into vogue in Southeast Asia and mainland China in late 1990s. Especially, hallyu is very popular among young people enchanted with Korean music (K-pop), dramas (K-drama), movies, fashion, food, and beauty in China, Taiwan, Hong Kong, and Vietnam, etc. This cultural phenomenon has been closely connected with multi-layered transnational movements of people, information and capital flows in East Asia. Since the 15 century, East and West have been the two subjects of cultural phenomena. 
Such East–West dichotomy was articulated by Westerners in the scholarly tradition known as “Orientalism.” During the Age of Exploration (1400–1600), West didn’t only take control of East by military force, but also created a new concept of East/Orient, as Edward Said analyzed it expertly in his masterpiece Orientalism in 1978. Throughout the history of imperialism for nearly 4-5 centuries, West was a cognitive subject, but East was an object being recognized by the former. Accordingly, “civilization and modernization” became the exclusive properties of which West had copyright (?!), whereas East was a “sub-subject” to borrow or even plagiarize from Western standards. In this sense, (making) modern history in East Asia was a compulsive imitation of Western civilization or a catch-up with the West in other words. Thus, it is interesting to note that East Asian people, after gaining economic power through “compressed modernization,” are eager to be main agents of their cultural activities in and through the enjoyment of East Asian popular culture in a postmodern era. In this transition from Western-centered into East Asian-based popular culture, they are no longer sub-subjects of modernity.", "title": "" }, { "docid": "8553a5d062f48f47de899cc5d23e2059", "text": "A systems approach to studying biology uses a variety of mathematical, computational, and engineering tools to holistically understand and model properties of cells, tissues, and organisms. Building from early biochemical, genetic, and physiological studies, systems biology became established through the development of genome-wide methods, high-throughput procedures, modern computational processing power, and bioinformatics. Here, we highlight a variety of systems approaches to the study of biological rhythms that occur with a 24-h period-circadian rhythms. 
We review how systems methods have helped to elucidate complex behaviors of the circadian clock including temperature compensation, rhythmicity, and robustness. Finally, we explain the contribution of systems biology to the transcription-translation feedback loop and posttranslational oscillator models of circadian rhythms and describe new technologies and \"-omics\" approaches to understand circadian timekeeping and neurophysiology.", "title": "" }, { "docid": "1349cdd5f181c2d6b958280a728d43b6", "text": "Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. 
Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks.", "title": "" }, { "docid": "a22f0e1bda2c3cfcf8e9f7cf3feabf6a", "text": "Object detection in aerial images is an active yet challenging task in computer vision because of the birdview perspective, the highly complex backgrounds, and the variant appearances of objects. Especially when detecting densely packed objects in aerial images, methods relying on horizontal proposals for common object detection often introduce mismatches between the Region of Interests (RoIs) and objects. This leads to the common misalignment between the final object classification confidence and localization accuracy. Although rotated anchors have been used to tackle this problem, the design of them always multiplies the number of anchors and dramatically increases the computational complexity. In this paper, we propose a RoI Transformer to address these problems. More precisely, to improve the quality of region proposals, we first designed a Rotated RoI (RRoI) learner to transform a Horizontal Region of Interest (HRoI) into a Rotated Region of Interest (RRoI). Based on the RRoIs, we then proposed a Rotated Position Sensitive RoI Align (RPS-RoI-Align) module to extract rotation-invariant features from them for boosting subsequent classification and regression. Our RoI Transformer is lightweight and can be easily embedded into detectors for oriented object detection. A simple implementation of the RoI Transformer has achieved state-of-the-art performances on two common and challenging aerial datasets, i.e., DOTA and HRSC2016, with a negligible reduction to detection speed. Our RoI Transformer exceeds the deformable Position Sensitive RoI pooling when oriented bounding-box annotations are available. Extensive experiments have also validated the flexibility and effectiveness of our RoI Transformer. 
The results demonstrate that it can be easily integrated with other detector architectures and significantly improve the performances.", "title": "" }, { "docid": "0c62440845e4543ee16150e0c7222f49", "text": "Background\nTo ensure high quality patient care an effective interprofessional collaboration between healthcare professionals is required. Interprofessional education (IPE) has a positive impact on team work in daily health care practice. Nevertheless, there are various challenges for sustainable implementation of IPE. To identify enablers and barriers of IPE for medical and nursing students as well as to specify impacts of IPE for both professions, the 'Cooperative academical regional evidence-based Nursing Study in Mecklenburg-Western Pomerania' (Care-N Study M-V) was conducted. The aim is to explore, how IPE has to be designed and implemented in medical and nursing training programs to optimize students' impact for IPC.\n\n\nMethods\nA qualitative study was conducted using the Delphi method and included 25 experts. Experts were selected by following inclusion criteria: (a) ability to answer every research question, one question particularly competent, (b) interdisciplinarity, (c) sustainability and (d) status. They were purposely sampled. Recruitment was based on existing collaborations and a web based search.\n\n\nResults\nThe experts find more enablers than barriers for IPE between medical and nursing students. Four primary arguments for IPE were mentioned: (1) development and promotion of interprofessional thinking and acting, (2) acquirement of shared knowledge, (3) promotion of beneficial information and knowledge exchange, and (4) promotion of mutual understanding. Major barriers of IPE are the coordination and harmonization of the curricula of the two professions. With respect to the effects of IPE for IPC, experts mentioned possible improvements on (a) patient level and (b) professional level. 
Experts expect an improved patient-centered care based on better mutual understanding and coordinated cooperation in interprofessional health care teams. To sustainably implement IPE for medical and nursing students, IPE needs endorsement by both, medical and nursing faculties.\n\n\nConclusion\nIn conclusion, IPE promotes interprofessional cooperation between the medical and the nursing profession. Skills in interprofessional communication and roles understanding will be primary preconditions to improve collaborative patient-centered care. The impact of IPE for patients and caregivers as well as for both professions now needs to be more specifically analysed in prospective intervention studies.", "title": "" }, { "docid": "910b955d0d290e90fe207418b5601019", "text": "We propose a branch flow model for the analysis and optimization of mesh as well as radial networks. The model leads to a new approach to solving optimal power flow (OPF) that consists of two relaxation steps. The first step eliminates the voltage and current angles and the second step approximates the resulting problem by a conic program that can be solved efficiently. For radial networks, we prove that both relaxation steps are always exact, provided there are no upper bounds on loads. For mesh networks, the conic relaxation is always exact but the angle relaxation may not be exact, and we provide a simple way to determine if a relaxed solution is globally optimal. We propose convexification of mesh networks using phase shifters so that OPF for the convexified network can always be solved efficiently for an optimal solution. We prove that convexification requires phase shifters only outside a spanning tree of the network and their placement depends only on network topology, not on power flows, generation, loads, or operating constraints. Part I introduces our branch flow model, explains the two relaxation steps, and proves the conditions for exact relaxation. 
Part II describes convexification of mesh networks, and presents simulation results.", "title": "" }, { "docid": "0f659ff5414e75aefe23bb85127d93dd", "text": "Important information is captured in medical documents. To make use of this information and intepret the semantics, technologies are required for extracting, analysing and interpreting it. As a result, rich semantics including relations among events, subjectivity or polarity of events, become available. The First Workshop on Extraction and Processing of Rich Semantics from Medical Texts, is devoted to the technologies for dealing with clinical documents for medical information gathering and application in knowledge based systems. New approaches for identifying and analysing rich semantics are presented. In this paper, we introduce the topic and summarize the workshop contributions.", "title": "" }, { "docid": "2578607ec2e7ae0d2e34936ec352ff6e", "text": "AI Innovation in Industry is a new department for IEEE Intelligent Systems, and this paper examines some of the basic concerns and uses of AI for big data (AI has been used in several different ways to facilitate capturing and structuring big data, and it has been used to analyze big data for key insights).", "title": "" }, { "docid": "9402365e2fdbdbdea13c18da5e4a05de", "text": "Battery models capture the characteristics of real-life batteries, and can be used to predict their behavior under various operating conditions. In this paper, a dynamic model of lithium-ion battery has been developed with MATLAB/Simulink® in order to investigate the output characteristics of lithium-ion batteries. Dynamic simulations are carried out, including the observation of the changes in battery terminal output voltage under different charging/discharging, temperature and cycling conditions, and the simulation results are compared with the results obtained from several recent studies. 
The simulation studies are presented to demonstrate that the model is effective and operational.", "title": "" }, { "docid": "fea1bc4b60abe7435c4953f2eb4b5dae", "text": "Facing a large number of personal photos and the limited resources of mobile devices, the cloud plays an important role in photo storing, sharing and searching. Meanwhile, some recent reputation damage and stalking incidents caused by photo leakage have increased people's concern about photo privacy. Though most would agree that photo search functionality and privacy are both valuable, few cloud systems support both of them simultaneously. At the center of such an ideal system is privacy-preserving outsourced image similarity measurement, which is extremely challenging when the cloud is untrusted and high extra overhead is undesirable. In this work, we introduce a framework, POP, which enables privacy-seeking mobile device users to outsource burdensome photo sharing and searching safely to untrusted servers. Unauthorized parties, including the server, learn nothing about photos or search queries. This is achieved by our carefully designed architecture and novel non-interactive privacy-preserving protocols for image similarity computation. Our framework is compatible with state-of-the-art image search techniques, and it requires few changes to existing cloud systems. For efficiency and good user experience, our framework allows users to define personalized private content by a simple check-box configuration and then enjoy the sharing and searching services as usual. All privacy protection modules are transparent to users. The evaluation of our prototype implementation with 31,772 real-life images shows little extra communication and computation overhead caused by our system.", "title": "" }, { "docid": "ca0d3a031ee0b29c8135613787ee19c4", "text": "As children and youth with diabetes grow up, they become increasingly responsible for controlling and monitoring their condition.
We conducted a scoping review to explore the research literature on self-management interventions for children and youth with diabetes. Eleven studies met the inclusion criteria. Some of the studies reviewed combined the participant population so that children with Type 1 as well as children with Type 2 diabetes were included. The majority of the studies focused on children age 14 yr or older and provided self-management education, self-management support, or both. Parent involvement was a key component of the majority of the interventions, and the use of technology was evident in 3 studies. The findings highlight factors that occupational therapy practitioners should consider when working with pediatric diabetes teams to select self-management interventions.", "title": "" }, { "docid": "990ee920895672c2b8b05bc6cf4fad3f", "text": "The world market for e-scooters is expected to experience an increase of 15% in Western Europe between 2015 and 2025. To support this growth, new low-cost, more efficient, and reliable drives with a high torque-to-weight ratio must be developed. In this paper a new axial-flux switched reluctance motor is proposed to accomplish this goal. The motor consists of a stator sandwiched by two rotors in which the ferromagnetic parts are made of soft magnetic composites. It has a new disposition of the stator and rotor poles and shorter flux paths. Simulations have demonstrated that the proposed axial-flux switched reluctance motor drive is able to meet the requirements of an e-scooter.", "title": "" } ]
scidocsrr
8fb257e2a9b3c8fe0352b6f7c724df84
Knowledge Base Question Answering Based on Deep Learning Models
[ { "docid": "59c24fb5b9ac9a74b3f89f74b332a27c", "text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.", "title": "" }, { "docid": "8b6832586f5ec4706e7ace59101ea487", "text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. 
When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.", "title": "" }, { "docid": "9ce0e8d06436bf17d8859c02bb8136e6", "text": "This paper presents a series of new latent semantic models based on a convolutional neural network (CNN) to learn low-dimensional semantic vectors for search queries and Web documents. By using the convolution-max pooling operation, local contextual information at the word n-gram level is modeled first. Then, salient local features in a word sequence are combined to form a global feature vector. Finally, the high-level semantic information of the word sequence is extracted to form a global vector representation. The proposed models are trained on clickthrough data by maximizing the conditional likelihood of clicked documents given a query, using stochastic gradient ascent. The new models are evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that our model significantly outperforms other semantic models, which were state-of-the-art in retrieval performance prior to this work.", "title": "" } ]
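Both positive passages above score candidates by cosine similarity between learned semantic vectors built from word-n-gram convolution and max-pooling. As a purely illustrative sketch of that pipeline (not either paper's trained model: the word vectors and projection matrix `W` here are random stand-ins, and all names are made up), the encode-then-score step can be expressed as:

```python
import numpy as np

def embed(tokens, word_vecs, W, win=3):
    """Toy CDSSM-style encoder: slide a window of `win` word vectors,
    project each window through W with tanh, then max-pool over time."""
    d = next(iter(word_vecs.values())).shape[0]
    pad = [np.zeros(d)] * (win // 2)
    seq = pad + [word_vecs.get(t, np.zeros(d)) for t in tokens] + pad
    local = [np.tanh(W @ np.concatenate(seq[i:i + win]))
             for i in range(len(tokens))]
    return np.max(local, axis=0)          # global max-pooling over positions

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

rng = np.random.default_rng(0)
vocab = ["who", "wrote", "authored", "gone", "with", "the", "wind", "paris"]
word_vecs = {w: rng.normal(size=8) for w in vocab}
W = rng.normal(size=(16, 24))             # random stand-in for trained weights

q  = embed("who wrote gone with the wind".split(), word_vecs, W)
r1 = embed("authored gone with the wind".split(), word_vecs, W)
r2 = embed("paris".split(), word_vecs, W)

# Compare relation-pattern scores; with trained weights, paraphrases
# like r1 should score higher than unrelated text like r2.
print(round(cosine(q, r1), 3), round(cosine(q, r2), 3))
```

In the actual models the weights are learned from clickthrough or QA supervision; this sketch only shows the shape of the computation.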
[ { "docid": "3d3f5b45b939f926d1083bab9015e548", "text": "Industry is facing an era characterised by unpredictable market changes and by a turbulent competitive environment. The key to compete in such a context is to achieve high degrees of responsiveness by means of high flexibility and rapid reconfiguration capabilities. The deployment of modular solutions seems to be part of the answer to face these challenges. Semantic modelling and ontologies may represent the needed knowledge representation to support flexibility and modularity of production systems, when designing a new system or when reconfiguring an existing one. Although numerous ontologies for production systems have been developed in the past years, they mainly focus on discrete manufacturing, while logistics aspects, such as those related to internal logistics and warehousing, have not received the same attention. The paper aims at offering a representation of logistics aspects, reflecting what has become a de-facto standard terminology in industry and among researchers in the field. Such representation is to be used as an extension to the already-existing production systems ontologies that are more focused on manufacturing processes. The paper presents the structure of the hierarchical relations within the examined internal logistics elements, namely Storage and Transporters, structuring them in a series of classes and sub-classes, suggesting also the relationships and the attributes to be considered to complete the modelling. Finally, the paper proposes an industrial example with a miniload system to show how such a modelling of internal logistics elements could be instanced in the real world. © 2017 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "f478bbf48161da50017d3ec9f8e677b4", "text": "Between November 1998 and December 1999, trained medical record abstractors visited the Micronesian jurisdictions of Chuuk, Kosrae, Pohnpei, and Yap (the four states of the Federated States of Micronesia), as well as the Republic of Palau (Belau), the Republic of Kiribati, the Republic of the Marshall Islands (RMI), and the Republic of Nauru to review all available medical records in order to describe the epidemiology of cancer in Micronesia. Annualized age-adjusted, site-specific cancer period prevalence rates for individual jurisdictions were calculated. Site-specific cancer occurrence in Micronesia follows a pattern characteristic of developing nations. At the same time, cancers associated with developed countries are also impacting these populations. Recommended are jurisdiction-specific plans that outline the steps and resources needed to establish or improve local cancer registries; expand cancer awareness and screening activities; and improve diagnostic and treatment capacity.", "title": "" }, { "docid": "6efa8ef12c1c4c63cb8a85ebfd5fcad9", "text": "Time- and pitch-scale modifications of speech signals find important applications in speech synthesis, playback systems, voice conversion, learning/hearing aids, etc. There is a requirement for computationally efficient and real-time implementable algorithms. In this paper, we propose a high-quality and computationally efficient time- and pitch-scaling methodology based on the glottal closure instants (GCIs), or epochs, in speech signals. The proposed algorithm, termed epoch-synchronous overlap-add time/pitch-scaling (ESOLA-TS/PS), segments speech signals into overlapping short-time frames, with the overlap between frames being dependent on the time-scaling factor. The adjacent frames are then aligned with respect to the epochs and the frames are overlap-added to synthesize time-scale modified speech.
Pitch scaling is achieved by resampling the time-scaled speech by a desired sampling factor. We also propose a concept of epoch embedding into speech signals, which facilitates the identification and time-stamping of samples corresponding to epochs and using them for time/pitch-scaling to multiple scaling factors whenever desired, thereby contributing to a faster and more efficient implementation. The results of perceptual evaluation tests reported in this paper indicate the superiority of ESOLA over state-of-the-art techniques. The proposed ESOLA significantly outperforms the conventional pitch-synchronous overlap-add (PSOLA) techniques in terms of perceptual quality and intelligibility of the modified speech. Unlike the waveform similarity overlap-add (WSOLA) or synchronous overlap-add (SOLA) techniques, the ESOLA technique can perform exact time-scaling of speech with high quality for any desired modification factor within a range of 0.5 to 2. Compared to synchronous overlap-add with fixed synthesis (SOLAFS), ESOLA is computationally advantageous and at least three times faster.", "title": "" }, { "docid": "0a81730588c23c4ed153dab18791bdc2", "text": "Deep neural networks (DNNs) have shown an inherent vulnerability to adversarial examples, which are maliciously crafted from real examples by attackers aiming to make target DNNs misbehave. Threats from adversarial examples exist widely in image, voice, speech, and text recognition and classification. Inspired by previous work, research on adversarial attacks and defenses in the text domain has developed rapidly. To give readers a general understanding of the field, this article presents a comprehensive review of adversarial examples in text, including attack and defense approaches. We analyze the advantages and shortcomings of recent adversarial example generation methods and elaborate on the efficiency and limitations of the countermeasures.
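As an aside on the ESOLA record above: it describes pitch scaling as resampling of the time-scaled signal by the desired factor. The sketch below illustrates only that final resampling step, by linear interpolation on a synthetic tone. It is not the paper's epoch-synchronous overlap-add itself, which requires detected glottal closure instants; the function name and parameters are illustrative.

```python
import numpy as np

def resample_linear(x, factor):
    """Resample signal x by `factor` (>1 raises pitch after time-scaling)
    using plain linear interpolation between neighboring samples."""
    n_out = int(len(x) / factor)
    t = np.arange(n_out) * factor                 # fractional read positions
    i = np.floor(t).astype(int)
    frac = t - i
    i1 = np.minimum(i + 1, len(x) - 1)
    return (1 - frac) * x[i] + frac * x[i1]

sr = 16000
tone = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)   # 1 s, 220 Hz
up = resample_linear(tone, 1.5)                         # read 1.5x faster

# A 220 Hz tone read 1.5x faster comes out near 330 Hz.
peak = np.argmax(np.abs(np.fft.rfft(up)))
freq = peak * sr / len(up)
print(round(freq))
```

In the full method, this resampling is applied after epoch-aligned time-scaling so that duration is restored while pitch shifts by the chosen factor.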
Finally, we discuss the challenges in adversarial texts and suggest directions for future research.", "title": "" }, { "docid": "98e8a120c393ac669f03f86944c81068", "text": "In this paper, we investigate deep neural networks for blind motion deblurring. Instead of regressing for the motion blur kernel and performing non-blind deblurring outside of the network (as most methods do), we propose a compact and elegant end-to-end deblurring network. Inspired by the data-driven sparse-coding approaches that are capable of capturing linear dependencies in data, we generalize this notion by embedding non-linearities into the learning process. We propose a new architecture for blind motion deblurring that consists of an autoencoder that learns the data prior, and an adversarial network that attempts to generate and discriminate between clean and blurred features. Once the network is trained, the generator learns a blur-invariant data representation which when fed through the decoder results in the final deblurred output.", "title": "" }, { "docid": "587ee07095b4bd1189e3bb0af215fa95", "text": "This paper discusses dynamic factor analysis, a technique for estimating common trends in multivariate time series. Unlike more common time series techniques such as spectral analysis and ARIMA models, dynamic factor analysis can analyse short, non-stationary time series containing missing values. Typically, the parameters in dynamic factor analysis are estimated by direct optimisation, which means that only small data sets can be analysed if computing time is not to become prohibitively long and the chances of obtaining sub-optimal estimates are to be avoided. This paper shows how the parameters of dynamic factor analysis can be estimated using the EM algorithm, allowing larger data sets to be analysed.
The technique is illustrated on a marine environmental data set.", "title": "" }, { "docid": "542bf63a4c97cbbfe91c39e32fbaf9dd", "text": "Vision is the most versatile and efficient sensory system, so it is not surprising that images play an important role in human perception. This is analogous to machine vision applications such as shape recognition, an important field today. This paper describes the implementation of image processing on an embedded platform and an embedded application: a robot capable of tracking an object in a 3-dimensional environment. It is a real-time operating system (RTOS) based embedded system which runs digital image processing algorithms to extract information from the images. A camera connected on the USB bus is used to capture images on the ARM9 core running the RTOS. Depending upon the information extracted, the locomotion is carried out. The camera is a simple CMOS USB-camera module with a resolution of about 0.3 MP. Video4Linux APIs provided by the kernel are used to capture the image, which is then decoded, and the required object location is detected using image processing algorithms. The actuations are made so as to track the object. The embedded Linux kernel provides support for multitasking and ensures that the task is performed within the real-time constraints. The OS makes the system flexible to changes such as interfacing new devices, handling the file system, and memory management for storage of data. Keywords: Embedded Linux, ARM, Video4Linux, YUYV, Embedded C, Object detection, CMOS, USB, SOC, Kernel", "title": "" }, { "docid": "a20a03fcb848c310cb966f6e6bc37c86", "text": "A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model.
Traditionally, hand-crafted priors along with iterative optimization methods have been used to solve such problems. In this paper we present unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods. We show that instances of the framework outperform the state-of-the-art by a substantial margin for a wide variety of imaging problems, such as denoising, deblurring, and compressed sensing magnetic resonance imaging (MRI). Moreover, we conduct experiments that explain how the framework is best used and why it outperforms previous methods.", "title": "" }, { "docid": "a30a40f97b688cd59005434bc936e4ef", "text": "The Semantic Web builds on the existing Web, presenting the meaning of information through well-defined vocabularies that people can understand. Semantic search, in turn, improves the accuracy of a search by understanding the intent behind the query and providing contextually relevant results. The paper describes a semantic approach to web search through a PHP application. The goal was to parse through a user’s browsing history and return semantically relevant web pages for the search query provided. The browser used for this purpose was Mozilla Firefox. The user’s history was stored in a MySQL database, which, in turn, was accessed using PHP. The ontology, created from the browsing history, was then parsed for the entered search query and the corresponding results were returned to the user, providing a semantically organized and relevant output.", "title": "" }, { "docid": "1ebd58c4d2cf14b7a674ec64370694c7", "text": "Tarlov cysts, which develop between the endoneurium and perineurium, are perineural cysts defined as cerebrospinal fluid (CSF)-filled saccular lesions commonly located in the extradural space of the sacral spinal canal.
They are rare, showing up in 1.5% to 4.6% of patients receiving magnetic resonance imaging (MRI) for their lumbosacral symptoms, and only 1% or less of Tarlov cysts are considered to be symptomatic. The clinical manifestation of a symptomatic Tarlov cyst is non-specific and can mimic other spinal disorders: localised pain, radiculopathy, weakness, sensory disturbance, and bladder and bowel dysfunction. Although surgical interventions are proven to be effective for treating Tarlov cysts, a conservative approach is clinically preferred to avoid invasive surgery. Some clinicians have reported good results with the use of steroid therapy. To the best of my knowledge, this case report is the first of its kind to use a medical acupuncture approach to manage this condition.", "title": "" }, { "docid": "6e2d7dae0891a2f3a8f02fdb81af9dc6", "text": "Wireless Sensor Networks (WSNs) are characterized by multi-hop wireless connectivity, frequently changing network topology, and the need for efficient routing protocols. The purpose of this paper is to evaluate the performance of the DSDV routing protocol in wireless sensor networks (WSNs) with respect to end-to-end delay and packet delivery ratio (PDR) under node mobility. Routing protocols are critical to performance in mobile wireless networks and play a crucial role in determining network performance in terms of packet delivery fraction, end-to-end delay, and packet loss. The destination-sequenced distance vector (DSDV) protocol is a proactive protocol that depends on routing tables maintained at each node. The routing protocol should detect and maintain optimal route(s) between source and destination nodes.
In this paper, we present the application of DSDV in WSNs as an extension of our previous study of the design and implementation details of the DSDV routing protocol in MANETs using the ns-2 network simulator.", "title": "" }, { "docid": "d345ad3f47376e7ae9b966eb8ad42dc9", "text": "The Wisconsin Card Sorting Task (WCST) has been used to assess dysfunction of the prefrontal cortex and basal ganglia. Previous brain imaging studies have focused on identifying activity related to the set-shifting requirement of the WCST. The present study used event-related functional magnetic resonance imaging (fMRI) to study the pattern of activation during four distinct stages in the performance of this task. Eleven subjects were scanned while performing the WCST and a control task involving matching two identical cards. The results demonstrated specific involvement of different prefrontal areas during different stages of task performance. The mid-dorsolateral prefrontal cortex (area 9/46) increased activity while subjects received either positive or negative feedback, that is at the point when the current information must be related to earlier events stored in working memory. This is consistent with the proposed role of the mid-dorsolateral prefrontal cortex in the monitoring of events in working memory. By contrast, a cortical basal ganglia loop involving the mid-ventrolateral prefrontal cortex (area 47/12), caudate nucleus, and mediodorsal thalamus increased activity specifically during the reception of negative feedback, which signals the need for a mental shift to a new response set. The posterior prefrontal cortex response was less specific; increases in activity occurred during both the reception of feedback and the response period, indicating a role in the association of specific actions to stimuli.
The putamen exhibited increased activity while matching after negative feedback but not while matching after positive feedback, implying greater involvement during novel than routine actions.", "title": "" }, { "docid": "2c5eb3fb74c6379dfd38c1594ebe85f4", "text": "Accurately recognizing speaker emotion and age/gender from speech can provide a better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.", "title": "" }, { "docid": "e5c4870acea1c7315cce0561f583626c", "text": "A discussion of CMOS readout technologies for infrared (IR) imaging systems is presented. First, a description of various types of IR detector materials and structures is given. Advances in detector fabrication technology and microelectronics process technology have led to the development of large-format arrays of IR imaging detectors. For such large IR FPAs, which are the critical components of advanced infrared imaging systems, general requirements and specifications are described. To support a good interface between the FPA and the downstream signal processing stage, both conventional and recently developed CMOS readout techniques are presented and discussed.
Finally, future development directions, including the smart focal plane concept, are also introduced.", "title": "" }, { "docid": "c33a4c60281f80ae9f1105b81b429af2", "text": "As virtual machines become increasingly commonplace as a method of separating hostile or hazardous code from commodity systems, the potential security exposure from implementation flaws has increased dramatically. This paper investigates the state of popular virtual machine implementations for x86 systems, employing a combination of source code auditing and blackbox random testing to assess the security exposure to the hosts of hostile virtualized environments.", "title": "" }, { "docid": "37a91db42be93afebb02a60cd9a7b339", "text": "We present a novel method for image-text multi-modal representation learning. To our knowledge, this work is the first to apply the adversarial learning concept to multi-modal learning without exploiting image-text pair information to learn a multi-modal feature. We use only category information, in contrast with most previous methods, which use image-text pair information for multi-modal embedding. In this paper, we show that a multi-modal feature can be achieved without image-text pair information, and that our method produces more similar distributions of images and text in the multi-modal feature space than methods that use image-text pair information. We also show that our multi-modal feature carries universal semantic information, even though it was trained for category prediction. Our model is trained end-to-end with backpropagation, is intuitive, and is easily extended to other multi-modal learning work.", "title": "" }, { "docid": "8d890dba24fc248ee37653aad471713f", "text": "We consider the problem of constructing a spanning tree for a graph G = (V,E) with n vertices whose maximal degree is the smallest among all spanning trees of G. This problem is easily shown to be NP-hard. We describe an iterative polynomial time approximation algorithm for this problem.
This algorithm computes a spanning tree whose maximal degree is at most O(Δ + log n), where Δ is the degree of some optimal tree. The result is generalized to the case where only some vertices need to be connected (Steiner case) and to the case of directed graphs. It is then shown that our algorithm can be refined to produce a spanning tree of degree at most Δ + 1. Unless P = NP, this is the best bound achievable in polynomial time.", "title": "" }, { "docid": "bb0ac3d88646bf94710a4452ddf50e51", "text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. 
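As an aside on the spanning-tree record above, which seeks a tree whose maximum degree is smallest: the toy below only illustrates the objective being minimized, using a plain BFS tree on a made-up graph. It is not the paper's approximation algorithm (which iteratively improves the tree to reach degree at most Δ + 1); the function and graph are illustrative.

```python
from collections import deque

def bfs_tree_max_degree(adj, root=0):
    """Build a BFS spanning tree of an undirected graph (adjacency list)
    and return the maximum vertex degree within that tree."""
    parent = {root: None}
    q = deque([root])
    deg = {v: 0 for v in adj}
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                deg[u] += 1
                deg[v] += 1
                q.append(v)
    return max(deg.values())

# A star (hub 0) plus a cycle through the leaves: BFS from the hub keeps
# the star, degree n-1, while a path around the cycle would have degree 2.
n = 6
adj = {v: set() for v in range(n)}
for v in range(1, n):
    adj[0].add(v); adj[v].add(0)          # star edges
for v in range(1, n):
    w = v % (n - 1) + 1
    adj[v].add(w); adj[w].add(v)          # cycle over the leaves

print(bfs_tree_max_degree(adj, root=0))   # prints 5
```

The gap between the naive tree (degree 5) and the best tree (degree 2) on this small graph is exactly what the paper's local-improvement procedure closes to within an additive 1 of optimal.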
Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. 
Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. 
Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension", "title": "" }, { "docid": "2b0969dd0089bd2a2054957477ea4ce1", "text": "A self-signaling action is an action chosen partly to secure good news about one’s traits or abilities, even when the action has no causal impact on these traits and abilities. We discuss some of the odd things that happen when self-signaling is introduced into an otherwise rational conception of action. We employ a signaling game perspective in which the diagnostic signals are an endogenous part of the equilibrium choice. We are interested (1) in pure self-signaling, separate from any desire to be regarded well by others, and (2) purely diagnostic motivation, that is, caring about what an action might reveal about a trait even when that action has no causal impact on it. When diagnostic motivation is strong, the person’s actions exhibit a rigidity characteristic of personal rules. Our model also predicts that a boost in self-image positively affects actions even though it leaves true preferences unchanged — we call this a “moral placebo effect.” 1 The chapter draws on (co-authored) Chapter 3 of Bodner’s doctoral dissertation (Bodner, 1995) and an unpublished MIT working paper (Bodner and Prelec, 1997). The authors thank Bodner’s dissertation advisors France Leclerc and Richard Thaler, workshop discussants Thomas Schelling, Russell Winer, and Mathias Dewatripont, and George Ainslie, Michael Bratman, Juan Carillo, Itzakh Gilboa, George Loewenstein, Al Mela, Matthew Rabin, Duncan Simester and Florian Zettelmeyer for comments on these ideas (with the usual disclaimer). 
We are grateful to Birger Wernerfelt for drawing attention to Bernheim's work on social conformity. Author addresses: Bodner – Director, Learning Innovations, 13\\4 Shimshon St., Jerusalem, 93501, Israel, learning@netvision.net.il; Prelec — E56-320, MIT, Sloan School, 38 Memorial Drive, Cambridge, MA 02139, dprelec@mit.edu. 1 Psychological evidence When we make a choice we reveal something of our inner traits or dispositions, not only to others, but also to ourselves. After the fact, this can be a source of pleasure or pain, depending on whether we were impressed or disappointed by our actions. Before the fact, the anticipation of future pride or remorse can influence what we choose to do. In a previous paper (Bodner and Prelec, 1997), we described how the model of a utility maximizing individual could be expanded to include diagnostic utility as a separate motive for action. We review the basic elements of that proposal here. The inspiration comes directly from signaling games in which actions of one person provide an informative signal to others, which in turn affects esteem (Bernheim, 1994). Here, however, actions provide a signal to ourselves, that is, actions are selfsignaling. For example, a person who takes the daily jog in spite of the rain may see that as a gratifying signal of willpower, dedication, or future well being. For someone uncertain about where he or she stands with respect to these dispositions, each new choice can provide a bit of good or bad \"news.” We incorporate the value of such \"news\" into the person's utility function. The notion that a person may draw inferences from an action he enacted partially in order to gain that inference has been posed as a philosophical paradox (e.g. Campbell and Sawden, 1985; Elster, 1985, 1989). A key problem is the following: Suppose that the disposition in question is altruism, and a person interprets a 25¢ donation to a panhandler as evidence of altruism. 
If the boost in self-esteem makes it worth giving the quarter even when there is no concern for the poor, then clearly, such a donation is not valid evidence of altruism. Logically, giving is valid evidence of high altruism only if a person with low altruism would not have given the quarter. This reasoning motivates our equilibrium approach, in which inferences from actions are an endogenous part of the equilibrium choice. As an empirical matter, several studies have demonstrated that diagnostic considerations do indeed affect behavior (Quattrone and Tversky, 1984; Shafir and Tversky, 1992; Bodner, 1995). An elegant experiment by Quattrone and Tversky (1984) both defines the self-signaling phenomenon and demonstrates its existence. Quattrone and Tversky first asked each subject to take a cold pressor pain test in which the subject's arm is submerged in a container of cold water until the subject can no longer tolerate the pain. Subsequently the subject was told that recent medical studies had discovered a certain inborn heart condition, and that people with this condition are “frequently ill, prone to heart-disease, and have shorter-than-average life expectancy.” Subjects were also told that this type could be identified by the effect of exercise on the cold pressor test. Subjects were randomly assigned to one of two conditions in which they were told that the bad type of heart was associated with either increases or with decreases in tolerance to the cold water after exercise. Subjects then repeated the cold pressor test, after riding an Exercycle for one minute. As predicted, the vast majority of subjects showed changes in tolerance on the second cold pressor trial in the direction correlated with “good news”—if told that decreased tolerance is diagnostic of a bad heart, they endured the near-freezing water longer (and vice versa).
The result shows that people are willing to bear painful consequences for a behavior that is a signal, though not a cause, of a medical diagnosis. An experiment by Shafir and Tversky (1992) on \"Newcomb's paradox\" reinforces the same point. In the philosophical version of the paradox, a person is (hypothetically) presented with two boxes, A and B. Box A contains either nothing or some large amount of money deposited by an \"omniscient being.\" Box B contains a small amount of money for sure. The decision-maker doesn’t know what Box A contains, and has to choose whether to take the contents of that box (A) or of both boxes (A+B). What makes the problem a paradox is that the person is asked to believe that the omniscient being has already predicted her choice, and on that basis has already either \"punished\" a greedy choice of (A+B) with no deposit in A or \"rewarded\" a choice of (A) with a large deposit. The dominance principle argues in favor of choosing both boxes, because the deposits are fixed at the moment of choice. This is the philosophical statement of the problem. In the actual experiment, Shafir and Tversky presented a variant of Newcomb’s problem at the end of another, longer experiment, in which subjects repeatedly played a Prisoner’s Dilemma game against (virtual) opponents via computer terminals. After finishing these games, a final “bonus” problem appeared, with the two Newcomb boxes, and subjects had to choose whether to take money from one box or from both boxes. The experimental cover story did not mention an omniscient being but instead informed the subjects that \"a program developed at MIT recently was applied during the entire session [of Prisoner’s Dilemma choices] to analyze the pattern of your preference.” Ostensibly, this mighty program could predict choices, one or two boxes, with 85% accuracy, and, of course, if the program predicted a choice of both boxes it would then put nothing in Box A.
Although it was evident that the money amounts were already set at the moment of choice, most experimental subjects opted for the single box. It is “as if” they believed that by declining to take the money in Box B, they could change the amount of money already deposited in box A. Although these are relatively recent experiments, their results are consistent with a long stream of psychological research, going back at least to the James-Lange theory of emotions which claimed that people infer their own states from behavior (e.g., they feel afraid if they see themselves running). The notion that people adopt the perspective of an outside observer when interpreting their own actions has been extensively explored in the research on self-perception (Bem, 1972). In a similar vein, there is an extensive literature confirming the existence of “self-handicapping” strategies, where a person might get too little sleep or under-prepare for an examination. In such a case, a successful performance could be attributed to ability while unsuccessful performance could be externalized as due to the lack of proper preparation (e.g. Berglas and Jones, 1978; Berglas and Baumeister, 1993). This broader context of psychological research suggests that we should view the results of Quattrone and Tversky, and Shafir and Tversky not as mere curiosities, applying to only contrived experimental situations, but instead as evidence of a general motivational “short circuit.” Motivation does not require causality, even when the lack of causality is utterly transparent. If anything, these experiments probably underestimate the impact of diagnosticity in realistic decisions, where the absence of causal links between actions and dispositions is less evident. Formally, our model distinguishes between outcome utility — the utility of the anticipated causal consequences of choice — and diagnostic utility — the value of the adjusted estimate of one’s disposition, adjusted in light of the choice. 
Individuals act so as to maximize some combination of the two sources of utility, and (in one version of the model) make correct inferences about what their choices imply about their dispositions. When diagnostic utility is sufficiently important, the individual chooses the same action independent of disposition. We interpret this as a personal rule. We describe other ways in which the behavior of self-signaling individuals is qualitatively different from that of standard economic agents. First, a self-signaling person will be more likely to reveal discrepancies between resolutions and actions when resolutions pertain to actions that are contingent or delayed. Thus she might honestly commit to do some worthy action if the circumstances requiring t", "title": "" }, { "docid": "f891a454b463d130bbe6306d92d05587", "text": "We examine the employment of word embeddings for machine translation (MT) of phrasal verbs (PVs), a linguistic phenomenon with challenging semantics. Using word embeddings, we augment the translation model with two features: one modelling distributional semantic properties of the source and target phrase and another modelling the degree of compositionality of PVs. We also obtain paraphrases to increase the amount of relevant training data. Our method leads to improved translation quality for PVs in a case study with English to Bulgarian MT system.", "title": "" } ]
scidocsrr
19266307e86f4bb129cf5b2b65e59652
Optic-Flow Based Control of a 46 g Quadrotor
[ { "docid": "dd37e97635b0ded2751d64cafcaa1aa4", "text": "DEVICES, AND STRUCTURES By S.E. Lyshevski, CRC Press, 2002. This book is the first of the CRC Press “Nano- and Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could serve as a textbook for a semester course on microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gene- or DNA-related cases. 
Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.", "title": "" }, { "docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7", "text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.", "title": "" } ]
[ { "docid": "b01028ef40b1fda74d0621c430ce9141", "text": "ETRI Journal, Volume 29, Number 2, April 2007 A novel low-voltage CMOS current feedback operational amplifier (CFOA) is presented. This realization allows nearly rail-to-rail input/output operations. Also, it provides high driving current capabilities. The CFOA operates at supply voltages of ±0.75 V with a total standby current of 304 μA. The circuit exhibits a bandwidth better than 120 MHz and a current drive capability of ±1 mA. An application of the CFOA to realize a new all-pass filter is given. PSpice simulation results using 0.25 μm CMOS technology parameters for the proposed CFOA and its application are given.", "title": "" }, { "docid": "a454b5a912c4b74a563f09249edecc34", "text": "There is great interest in assessing student learning in unscripted, open-ended environments, but students' work can evolve in ways that are too subtle or too complex to be detected by the human eye. In this paper, I describe an automated technique to assess, analyze and visualize students learning computer programming. I logged hundreds of snapshots of students' code during a programming assignment, and I employed different quantitative techniques to extract students' behaviors and categorize them in terms of programming experience. First, I review the literature on educational data mining, learning analytics, computer vision applied to assessment, and emotion detection, discuss the relevance of the work, and describe one case study with a group of undergraduate engineering students", "title": "" }, { "docid": "36d5ba974945cba3bf9120f3ab9aa7a0", "text": "In this paper, we analyze the spectral efficiency of multicell massive multiple-input-multiple-output (MIMO) systems with downlink training and a new pilot contamination precoding (PCP) scheme. First, we analyze the spectral efficiency of the beamforming training (BT) scheme with maximum-ratio transmission (MRT) precoding. 
Then, we derive an approximate closed-form expression of the spectral efficiency to find the optimal lengths of uplink and downlink pilots. Simulation results show that the achieved spectral efficiency can be improved due to channel estimation at the user side, but in comparison with a single-cell scenario, the spectral efficiency per cell in multicell scenario degrades because of pilot contamination. We focus on the practical case where the number of base station (BS) antennas is large but still finite and propose the BT and PCP (BT-PCP) transmission scheme to mitigate the pilot contamination with limited cooperation between BSs. We confirm the effectiveness of the proposed BT-PCP scheme with simulation, and we show that the proposed BT-PCP scheme achieves higher spectral efficiency than the conventional PCP method and that the performance gap from the perfect channel state information (CSI) scenario without pilot contamination is small.", "title": "" }, { "docid": "2107e4efdf7de92a850fc0142bf8c8c3", "text": "Throughout the wide range of aerial robot related applications, selecting a particular airframe is often a trade-off. Fixed-wing small-scale unmanned aerial vehicles (UAVs) typically have difficulty surveying at low altitudes while quadrotor UAVs, having more maneuverability, suffer from limited flight time. Recent prior work [1] proposes a solar-powered small-scale aerial vehicle designed to transform between fixed-wing and quad-rotor configurations. Surplus energy collected and stored while in a fixed-wing configuration is utilized while in a quad-rotor configuration. This paper presents an improvement to the robot's design in [1] by pursuing a modular airframe, an optimization of the hybrid propulsion system, and solar power electronics. Two prototypes of the robot have been fabricated for independent testing of the airframe in fixed-wing and quad-rotor states. 
Validation of the solar power electronics and hybrid propulsion system designs was demonstrated through a combination of simulation and empirical data from prototype hardware.", "title": "" }, { "docid": "4726381f2636acc8bebe881dc25316f8", "text": "Optimized hardware for propagating and checking software-programmable metadata tags can achieve low runtime overhead. We generalize prior work on hardware tagging by considering a generic architecture that supports software-defined policies over metadata of arbitrary size and complexity; we introduce several novel microarchitectural optimizations that keep the overhead of this rich processing low. Our model thus achieves the efficiency of previous hardware-based approaches with the flexibility of the software-based ones. We demonstrate this by using it to enforce four diverse safety and security policies---spatial and temporal memory safety, taint tracking, control-flow integrity, and code and data separation---plus a composite policy that enforces all of them simultaneously. Experiments on SPEC CPU2006 benchmarks with a PUMP-enhanced RISC processor show modest impact on runtime (typically under 10%) and power ceiling (less than 10%), in return for some increase in energy usage (typically under 60%) and area for on-chip memory structures (110%).", "title": "" }, { "docid": "a0a9fc47ba3694864e64e4f29c3c5735", "text": "Severe cases of traumatic brain injury (TBI) require neurocritical care, the goal being to stabilize hemodynamics and systemic oxygenation to prevent secondary brain injury. It is reported that approximately 45 % of dysoxygenation episodes during critical care have both extracranial and intracranial causes, such as intracranial hypertension and brain edema. For this reason, neurocritical care is incomplete if it only focuses on prevention of increased intracranial pressure (ICP) or decreased cerebral perfusion pressure (CPP). 
Arterial hypotension is a major risk factor for secondary brain injury, but hypertension with a loss of autoregulation response or excess hyperventilation to reduce ICP can also result in a critical condition in the brain and is associated with a poor outcome after TBI. Moreover, brain injury itself stimulates systemic inflammation, leading to increased permeability of the blood-brain barrier, exacerbated by secondary brain injury and resulting in increased ICP. Indeed, systemic inflammatory response syndrome after TBI reflects the extent of tissue damage at onset and predicts further tissue disruption, producing a worsening clinical condition and ultimately a poor outcome. Elevation of blood catecholamine levels after severe brain damage has been reported to contribute to the regulation of the cytokine network, but this phenomenon is a systemic protective response against systemic insults. Catecholamines are directly involved in the regulation of cytokines, and elevated levels appear to influence the immune system during stress. Medical complications are the leading cause of late morbidity and mortality in many types of brain damage. Neurocritical care after severe TBI has therefore been refined to focus not only on secondary brain injury but also on systemic organ damage after excitation of sympathetic nerves following a stress reaction.", "title": "" }, { "docid": "a28199159d7508a7ef57cd20adf084c2", "text": "Brain-computer interfaces (BCIs) translate brain activity into signals controlling external devices. BCIs based on visual stimuli can maintain communication in severely paralyzed patients, but only if intact vision is available. Debilitating neurological disorders however, may lead to loss of intact vision. The current study explores the feasibility of an auditory BCI. 
Sixteen healthy volunteers participated in three training sessions consisting of 30 2-3 min runs in which they learned to increase or decrease the amplitude of sensorimotor rhythms (SMR) of the EEG. Half of the participants were presented with visual and half with auditory feedback. Mood and motivation were assessed prior to each session. Although BCI performance in the visual feedback group was superior to the auditory feedback group there was no difference in performance at the end of the third session. Participants in the auditory feedback group learned slower, but four out of eight reached an accuracy of over 70% correct in the last session comparable to the visual feedback group. Decreasing performance of some participants in the visual feedback group is related to mood and motivation. We conclude that with sufficient training time an auditory BCI may be as efficient as a visual BCI. Mood and motivation play a role in learning to use a BCI.", "title": "" }, { "docid": "8d19d251e31dd3564f7bcab33cc3c9b7", "text": "The visual appearance of a person is easily affected by many factors like pose variations, viewpoint changes and camera parameter differences. This makes person Re-Identification (ReID) among multiple cameras a very challenging task. This work is motivated to learn mid-level human attributes which are robust to such visual appearance variations. And we propose a semi-supervised attribute learning framework which progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a three-stage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. 
The predicted attributes, namely deep attributes, exhibit superior generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple metric learning module further boosts our method, making it significantly outperform many recent works.", "title": "" }, { "docid": "4cb66593d4f9ddb30cb7e470db22f0f7", "text": "Image fusion is the process of combining two or more images for providing more information. Medical image fusion refers to the fusion of medical images obtained from different modalities. Medical image fusion helps in medical diagnosis by improving the quality of the images. In diagnosis, images obtained from a single modality like MRI, CT, etc., may not be able to provide all the required information. Information obtained from other modalities must also be combined to improve the information acquired. For example, combining information from MRI and CT modalities gives more information than either modality separately. The aim is to provide a method for fusing the images from the individual modalities in such a way that the fusion results in an image that gives more information without any loss of the input information and without any redundancy or artifacts. In the fusion of medical images obtained from different modalities, the images might be in different coordinate systems and have to be aligned properly for efficient fusion. The aligning of the input images before proceeding with the fusion is called image registration. Intensity-based and mutual-information-based image registration procedures are carried out before decomposing the images. The two imaging modalities CT and MRI are considered for this study. 
The results on CT and MR images demonstrate the performance of the fusion algorithms in comparison with registration schemes.", "title": "" }, { "docid": "ef36ed423a1834272684cf39d06453c1", "text": "In general, two basic methods are used for controlling the velocity of a hydraulic cylinder. First, an axial variable-displacement pump controls flow to the cylinder. This configuration is commonly known as a hydrostatic transmission. Second, a proportional valve powered by a constant-pressure source, such as a pressure-compensated pump, drives the hydraulic cylinder. In this study, the electro-hydraulic servo system (EHSS) for velocity control of a hydraulic cylinder is investigated experimentally and analyzed theoretically. The controlled hydraulic cylinder is driven either by a swashplate axial piston pump or by a proportional valve to achieve velocity control. The theoretical part includes the derivation of the mathematical model equations of the combined system. The velocity control system for the hydraulic cylinder uses a simple PID controller to maintain a constant velocity range under applied external variable loads. An experimental set-up is constructed, which consists of the hydraulic test pump unit, the electro-hydraulic proportional valve unit, the hydraulic actuator unit, the external load control unit and the interfacing electronic unit. The experimental results show that the PID controller can achieve good velocity control with the variable-displacement axial piston pump and also with the proportional valve under external load variations.", "title": "" }, { "docid": "1f972cc136f47288888657e84464412e", "text": "This paper evaluates the impact of machine translation on the software localization process and the daily work of professional translators when SMT is applied to low-resourced languages with rich morphology. 
Translation from English into six low-resourced languages (Czech, Estonian, Hungarian, Latvian, Lithuanian and Polish) from different language groups are examined. Quality, usability and applicability of SMT for professional translation were evaluated. The building of domain and project tailored SMT systems for localization purposes was evaluated in two setups. The results of the first evaluation were used to improve SMT systems and MT platform. The second evaluation analysed a more complex situation considering tag translation and its effects on the translator’s productivity.", "title": "" }, { "docid": "b610e9bef08ef2c133a02e887b89b196", "text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.", "title": "" }, { "docid": "7bd7b0b85ae68f0ccd82d597667d8acb", "text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. 
This paper proposes an energy-aware trust derivation scheme using a game-theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game-theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.", "title": "" }, { "docid": "584d2858178e4e33855103a71d7fdce4", "text": "This paper presents a 5G mm-wave phased-array antenna for 3D-hybrid beamforming. It uses an MFC to steer the beam in elevation, and uses a Butler matrix network for the azimuth. For the Butler matrix network, a 180° ring hybrid coupler switch network is proposed to obtain additional beam patterns and improved SRR in comparison with the conventional structure. Also, 15 azimuth beam patterns can be selected. When using the chip with the proposed structure, it is possible to obtain over 1000 beamforming variations. In addition, it is suitable for 5G systems or satellite communication systems that require beamforming.", "title": "" }, { "docid": "e3eae34f1ad48264f5b5913a65bf1247", "text": "Double spending and blockchain forks are two main issues that the Bitcoin crypto-system is confronted with. The former refers to an adversary's ability to use the very same coin more than once while the latter reflects the occurrence of transient inconsistencies in the history of the blockchain distributed data structure. We present a new approach to tackle these issues: it consists in adding some local synchronization constraints on Bitcoin's validation operations, and in making these constraints independent from the native blockchain protocol. Synchronization constraints are handled by nodes which are randomly and dynamically chosen in the Bitcoin system. 
We show that with such an approach, the content of the blockchain is consistent with all validated transactions and blocks, which guarantees the absence of both double-spending attacks and blockchain forks.", "title": "" }, { "docid": "119696bc950e1c36fa9d09ee8c1aa6fb", "text": "A smart grid is an intelligent electricity grid that optimizes the generation, distribution and consumption of electricity through the introduction of Information and Communication Technologies on the electricity grid. In essence, smart grids bring profound changes in the information systems that drive them: new information flows coming from the electricity grid, new players such as decentralized producers of renewable energies, new uses such as electric vehicles and connected houses and new communicating equipment such as smart meters, sensors and remote control points. All this will cause a deluge of data that the energy companies will have to face. Big Data technologies offer suitable solutions for utilities, but the decision about which Big Data technology to use is critical. In this paper, we provide an overview of data management for smart grids, summarise the added value of Big Data technologies for this kind of data, and discuss the technical requirements, the tools and the main steps to implement Big Data solutions in the smart grid context.", "title": "" }, { "docid": "cd7fa5de19b12bdded98f197c1d9cd22", "text": "Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects, identifying the boundaries for the subsequence of words which represent the NP, and classifying the NP to a specific broad category such as politics, sports, etc. 
To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base.", "title": "" }, { "docid": "7a7fedfeaa85536028113c65d5650957", "text": "In this work we propose a novel framework named DualNet, aiming at learning more accurate representations for image recognition. Here two parallel neural networks are coordinated to learn complementary features and thus a wider network is constructed. Specifically, we logically divide an end-to-end deep convolutional neural network into two functional parts, i.e., feature extractor and image classifier. The extractors of the two subnetworks are placed side by side, which together form the feature extractor of DualNet. Then the two-stream features are aggregated to the final classifier for overall classification, while two auxiliary classifiers are appended behind the feature extractor of each subnetwork to make the separately learned features discriminative alone. The complementary constraint is imposed by weighting the three classifiers, which is indeed the key of DualNet. The corresponding training strategy is also proposed, consisting of iterative training and joint finetuning, to make the two subnetworks cooperate well with each other. Finally, DualNets based on the well-known CaffeNet, VGGNet, NIN and ResNet are thoroughly investigated and experimentally evaluated on multiple datasets including CIFAR-100, Stanford Dogs and UEC FOOD-100. The results demonstrate that DualNet can really help learn more accurate image representations, and thus result in higher accuracy for recognition.
In particular, the performance on CIFAR-100 is state-of-the-art compared to the recent works.", "title": "" }, { "docid": "81a3def63addf898b91f4d7217f6298a", "text": "Cloud computing is a new form of technology in which infrastructure, development platforms, software, and storage can be delivered as a service in a pay-as-you-use cost model. However, for critical business applications and more sensitive information, cloud providers must be selected based on a high level of trustworthiness. In this paper, we present a trust model to evaluate cloud services in order to help cloud users select the most reliable resources. We integrate our previous work “conceptual SLA framework for cloud computing” with the proposed trust management model to present a new solution for defining reliable criteria for the selection process of cloud providers.", "title": "" }, { "docid": "6eca7ba1607a1d7d6697af6127a92c4b", "text": "Cluster analysis is an attractive data mining technique that is used in many fields. One popular class of data clustering algorithms is the center-based clustering algorithm. K-means is a popular clustering method due to its simplicity and high speed in clustering large datasets. However, K-means has two shortcomings: dependency on the initial state and convergence to local optima; moreover, global solutions of large problems cannot be found with a reasonable amount of computation effort. In order to overcome the local optima problem, many studies have been done in clustering. Over the last decade, modeling the behavior of social insects, such as ants and bees, for the purpose of search and problem solving has been the context of the emerging area of swarm intelligence. Honey-bees are among the most closely studied social insects. Honey-bee mating may also be considered as a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of marriage in real honey-bees. Honey-bees have also been used to model agent-based systems.
In this paper, we propose an application of honeybee mating optimization to clustering (HBMK-means). We compared HBMK-means with other heuristic algorithms for clustering, such as GA, SA, TS, and ACO, by implementing them on several well-known datasets. Our findings show that the proposed algorithm works better than the best of these.", "title": "" } ]
scidocsrr
7ae82c8c9a24e86e496993d96498043d
A 80 nW, 32 kHz charge-pump based ultra low power oscillator with temperature compensation
[ { "docid": "8b33ce7ccfdd87dc9f1da56157b7331f", "text": "This work presents an ultra-low power oscillator designed for wake-up timers in compact wireless sensors. A constant charge subtraction scheme removes continuous comparator delay from the oscillation period, which is the source of temperature dependence in conventional RC relaxation oscillators. This relaxes comparator design constraints, enabling low power operation. In 0.18μm CMOS, the oscillator consumes 5.8nW at room temperature with temperature stability of 45ppm/°C (-10°C to 90°C) and 1%/V line sensitivity.", "title": "" }, { "docid": "7579ea317e216e80bcd08eabb4615711", "text": "This paper presents an ultra low power clock source using a 1μW temperature compensated on-chip digitally controlled oscillator (OscCMP) and a 100nW uncompensated oscillator (OscUCMP) with respective temperature stabilities of 5ppm/°C and 1.67%/°C. A fast locking circuit re-locks OscUCMP to OscCMP often enough to achieve a high effective temperature stability. Measurements of a 130nm CMOS chip show that this combination gives a stability of 5ppm/°C from 20°C to 40°C (14ppm/°C from 20°C to 70°C) at 150nW if temperature changes by 1°C or less every second. This result is 7X lower power than typical XTALs and 6X more stable than prior on-chip solutions.", "title": "" } ]
[ { "docid": "14049dd7ee7a07107702c531fec4ff61", "text": "Reducing errors and improving quality are an integral part of Pathology and Laboratory Medicine. The rate of errors is reviewed for the pre-analytical, analytical, and post-analytical phases for a specimen. The quality systems in place in pathology today are identified and compared with benchmarks for quality. The types and frequency of errors and quality systems are reviewed for surgical pathology, cytopathology, clinical chemistry, hematology, microbiology, molecular biology, and transfusion medicine. Seven recommendations are made to reduce errors in the future for Pathology and Laboratory Medicine.", "title": "" }, { "docid": "10fff590f9c8e99ebfd1b4b4e453241f", "text": "Object-oriented programming has many advantages over conventional procedural programming languages for constructing highly flexible, adaptable, and extensible systems. Therefore a transformation of procedural programs to object-oriented architectures becomes an important process to enhance the reuse of procedural programs. Moreover, it would be useful to assist software developers with automatic methods in transforming procedural code into an equivalent object-oriented one. In this paper we aim at introducing an agglomerative hierarchical clustering algorithm that can be used for assisting software developers in the process of transforming procedural code into an object-oriented architecture. We also provide a code example showing how our approach works, emphasizing in this way the potential of our proposal.", "title": "" }, { "docid": "1667c7e872bac649051bb45fc85e9921", "text": "Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors.
In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biometric identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions; all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.", "title": "" }, { "docid": "344be59c5bb605dec77e4d7bd105d899", "text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images.
Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.", "title": "" }, { "docid": "8bb30efa3f14fa0860d1e5bc1265c988", "text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses, thus increasing the performance and reliability of the electrical system. It also opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure, consisting of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid, is reviewed. The main objective of this paper is to give a description of the state of the art of distributed power generation systems (DPGS) based on renewable energy and to explore the power converters connected in parallel to the grid, which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified into three classes. This analysis is extended focusing mainly on the three classes of configurations: grid-forming, grid-feeding, and grid-supporting.
The paper ends with an overview and a discussion of the control structures and strategies to control distributed power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting.", "title": "" }, { "docid": "ba4faa0390c2c75aab79822a1e523e71", "text": "The number of linked data sources and the size of the linked open data graph keep growing every day. As a consequence, semantic RDF services are more and more confronted with various “big data” problems. Query processing is one of them and needs to be efficiently addressed with executions over scalable, highly available and fault tolerant frameworks. Data management systems requiring these properties are rarely built from scratch but are rather designed on top of an existing cluster computing engine. In this work, we consider the processing of SPARQL queries with Apache Spark. We propose and compare five different query processing approaches based on different join execution models and Spark components. A detailed experimentation, on real-world and synthetic data sets, emphasizes that two approaches tailored for the RDF data model outperform the other ones on all major query shapes, i.e., star, snowflake, chain and hybrid.", "title": "" }, { "docid": "5a248466c2e82b8453baa483a05bc25b", "text": "Early severe stress and maltreatment produce a cascade of neurobiological events that have the potential to cause enduring changes in brain development. These changes occur on multiple levels, from neurohumoral (especially the hypothalamic-pituitary-adrenal [HPA] axis) to structural and functional. The major structural consequences of early stress include reduced size of the mid-portions of the corpus callosum and attenuated development of the left neocortex, hippocampus, and amygdala.
Major functional consequences include increased electrical irritability in limbic structures and reduced functional activity of the cerebellar vermis. There are also gender differences in vulnerability and functional consequences. The neurobiological sequelae of early stress and maltreatment may play a significant role in the emergence of psychiatric disorders during development.", "title": "" }, { "docid": "09380650b0af3851e19f18de4a2eacb2", "text": "This paper presents a novel self-assembly modular robot (Sambot) that also shares characteristics with self-reconfigurable, self-assembling, and swarm robots. Each Sambot can move autonomously and connect with the others. Multiple Sambot can be self-assembled to form a robotic structure, which can be reconfigured into different configurable robots and can locomote. A novel mechanical design is described to realize the functions of autonomous motion and docking. Introducing embedded mechatronics integration technology, all actuators, sensors, microprocessors, and power and communication units are embedded in the module. The Sambot is compact and flexible; the overall size is 80×80×102 mm. The preliminary self-assembly and self-reconfiguration of Sambot is discussed, and several possible configurations consisting of multiple Sambot are designed in a simulation environment. Finally, an experiment on self-assembly, self-reconfiguration, and locomotion of multiple Sambot has been implemented.", "title": "" }, { "docid": "50df49f3c9de66798f89fdeab9d2ae85", "text": "Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high stakes settings, such as sentencing, hiring, policing, college admissions, and parole decisions, is the perceived “neutrality” of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice.
There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained. Unlike previous work in this area, our procedure accommodates data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. In the process, we demonstrate that a common approach to creating “race-neutral” models, omitting race as a covariate, still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.
We develop a TensorFlow-based deep learning library, called NetLearner, and implement a handful of cutting-edge deep learning models for NIDS. Finally, we conduct a quantitative and comparative performance evaluation of those models using NetLearner.", "title": "" }, { "docid": "30e89edb65cbf54b27115c037ee9c322", "text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts", "title": "" }, { "docid": "bf9910e87c2294e307f142e0be4ed4f6", "text": "The rapidly developing cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute applications remotely. A mobile device should judiciously decide whether to offload computation and which portion of application should be offloaded to the cloud. In this paper, we consider a mobile cloud computing (MCC) interaction system consisting of multiple mobile devices and the cloud computing facilities. 
We provide a nested two stage game formulation for the MCC interaction system. In the first stage, each mobile device determines the portion of its service requests for remote processing in the cloud. In the second stage, the cloud computing facilities allocate a portion of its total resources for service request processing depending on the request arrival rate from all the mobile devices. The objective of each mobile device is to minimize its power consumption as well as the service request response time. The objective of the cloud computing controller is to maximize its own profit. Based on the backward induction principle, we derive the optimal or near-optimal strategy for all the mobile devices as well as the cloud computing controller in the nested two stage game using convex optimization technique. Experimental results demonstrate the effectiveness of the proposed nested two stage game-based optimization framework on the MCC interaction system. The mobile devices can achieve simultaneous reduction in average power consumption and average service request response time, by 21.8% and 31.9%, respectively, compared with baseline methods.", "title": "" }, { "docid": "914d17433df678e9ace1c9edd1c968d3", "text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. 
To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Finally, we evaluate a rich set of design choices for how to encode, combine and decode information in our proposed Deep Learning formulation.", "title": "" }, { "docid": "a1f05b8954434a782f9be3d9cd10bb8b", "text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.", "title": "" }, { "docid": "df35b679204e0729266a1076685600a1", "text": "A new innovations state space modeling framework, incorporating Box-Cox transformations, Fourier series with time varying coefficients and ARMA error correction, is introduced for forecasting complex seasonal time series that cannot be handled using existing forecasting models. Such complex time series include time series with multiple seasonal periods, high frequency seasonality, non-integer seasonality and dual-calendar effects. Our new modelling framework provides an alternative to existing exponential smoothing models, and is shown to have many advantages. The methods for initialization and estimation, including likelihood evaluation, are presented, and analytical expressions for point forecasts and interval predictions under the assumption of Gaussian errors are derived, leading to a simple, comprehensible approach to forecasting complex seasonal time series.
Our trigonometric formulation is also presented as a means of decomposing complex seasonal time series, which cannot be decomposed using any of the existing decomposition methods. The approach is useful in a broad range of applications, and we illustrate its versatility in three empirical studies where it demonstrates excellent forecasting performance over a range of prediction horizons. In addition, we show that our trigonometric decomposition leads to the identification and extraction of seasonal components, which are otherwise not apparent in the time series plot itself.", "title": "" }, { "docid": "e13d935c4950323a589dce7fd5bce067", "text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). 
Our framework can be applied to many real world tasks and can be easily integrated into current crowdsourcing platforms.", "title": "" }, { "docid": "ba8d73938ea51f1b41add8c572c1667b", "text": "Traditionally, when storage systems employ erasure codes, they are designed to tolerate the failures of entire disks. However, the most common types of failures are latent sector failures, which only affect individual disk sectors, and block failures which arise through wear on SSDs. This paper introduces SD codes, which are designed to tolerate combinations of disk and sector failures. As such, they consume far less storage resources than traditional erasure codes. We specify the codes with enough detail for the storage practitioner to employ them, discuss their practical properties, and detail an open-source implementation.", "title": "" }, { "docid": "152182336e620ee94f24e3865b7b377f", "text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. H.M. is supported in part by ARO Grant W911NF-15-10385.", "title": "" }, { "docid": "0c025ec05a1f98d71c9db5bfded0a607", "text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread.
Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and gives examples of their use. The critical issue of data requirements is also discussed, as well as model choice, model-building and the interpretation and use of results.", "title": "" }, { "docid": "0d1da055e444a90ec298a2926de9fe7b", "text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time.
A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.", "title": "" } ]
scidocsrr
586b8f0da04528aec5988e977a916bbf
Concept Development and Design of a Spherical Wheel Motor (SWM)
[ { "docid": "b6f09a89a16474860091ddb325d49017", "text": "This paper addresses the design and commutation of a novel kind of spherical stepper motor in which the poles of the stator are electromagnets and the poles of the rotor (rotating ball) are permanent magnets. Due to the fact that points on a sphere can only be arranged with equal spacing in a limited number of cases (corresponding to the Platonic solids), design of spherical stepper motors with fine rotational increments is fundamentally geometrical in nature. We address this problem and the related problem of how rotor and stator poles should be arranged in order to interact to cause motion. The resulting design has a much wider range of unhindered motion than other spherical stepper motor designs in the literature. We also address the problem of commutation, i.e., we determine the sequence of stator polarities in time that approximate a desired spherical motion.", "title": "" } ]
[ { "docid": "ad5943b20597be07646cca1af9d23660", "text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely well-defined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.", "title": "" }, { "docid": "f79090002d75e922e272c44391ddb6f0", "text": "Nowadays, coffee beans are almost exclusively used for the preparation of the beverage. The sustainability of coffee production can be achieved by introducing new applications for the valorization of coffee by-products. Coffee silverskin is the by-product generated during roasting, and because of its powerful antioxidant capacity, coffee silverskin aqueous extract (CSE) may be used for other applications, such as antiaging cosmetics and dermaceutics. This study aims to contribute to the coffee sector's sustainability through the application of CSE to preserve skin health. Preclinical data regarding the antiaging properties of CSE employing human keratinocytes and Caenorhabditis elegans are collected during the present study. Accelerated aging was induced by tert-butyl hydroperoxide (t-BOOH) in HaCaT cells and by ultraviolet radiation C (UVC) in C. elegans.
Results suggest that the tested concentrations of coffee extracts were not cytotoxic, and CSE 1 mg/mL gave resistance to skin cells when oxidative damage was induced by t-BOOH. On the other hand, nematodes treated with CSE (1 mg/mL) showed significantly increased longevity compared to those cultured on a standard diet. In conclusion, our results support the antiaging properties of the CSE and its great potential for improving skin health due to its antioxidant character, associated with phenols among other bioactive compounds present in the botanical material.", "title": "" }, { "docid": "aa4d12547a6b85a34ee818f1cc71d1da", "text": "OBJECTIVE\nDevelopment of a new framework for the National Institute on Aging (NIA) to assess progress and opportunities toward stimulating and supporting rigorous research to address health disparities.\n\n\nDESIGN\nPortfolio review of NIA's health disparities research portfolio to evaluate NIA's progress in addressing priority health disparities areas.\n\n\nRESULTS\nThe NIA Health Disparities Research Framework highlights important factors for health disparities research related to aging, provides an organizing structure for tracking progress, stimulates opportunities to better delineate causal pathways and broadens the scope for malleable targets for intervention, aiding in our efforts to address health disparities in the aging population.\n\n\nCONCLUSIONS\nThe promise of health disparities research depends largely on scientific rigor that builds on past findings and aggressively pursues new approaches. The NIA Health Disparities Framework provides a landscape for stimulating interdisciplinary approaches, evaluating research productivity and identifying opportunities for innovative health disparities research related to aging.", "title": "" }, { "docid": "a094fe8de029646a408bbb685824581c", "text": "Will reading habit influence your life? Many say yes. 
Reading Computational Intelligence: Principles, Techniques and Applications is a good habit; you can develop this habit into an interesting way of life. A reading habit will not only give you a favourite activity; it can become one of the guiding influences of your life. Once reading has become a habit, you will no longer treat it as a disturbing or boring activity, and you can gain the many benefits of reading.", "title": "" }, { "docid": "284c7292bd7e79c5c907fc2aa21fb52c", "text": "Monte Carlo Tree Search (MCTS) is an AI technique that has been successfully applied to many deterministic games of perfect information, leading to large advances in a number of domains, such as Go and General Game Playing. Imperfect information games are less well studied in the field of AI despite being popular and of significant commercial interest, for example in the case of computer and mobile adaptations of turn-based board and card games. This is largely because hidden information and uncertainty lead to a large increase in complexity compared to perfect information games. In this thesis MCTS is extended to games with hidden information and uncertainty through the introduction of the Information Set MCTS (ISMCTS) family of algorithms. It is demonstrated that ISMCTS can handle hidden information and uncertainty in a variety of complex board and card games. This is achieved whilst preserving the general applicability of MCTS and using computational budgets appropriate for use in a commercial game. The ISMCTS algorithm is shown to outperform the existing approach of Perfect Information Monte Carlo (PIMC) search. Additionally it is shown that ISMCTS can be used to solve two known issues with PIMC search, namely strategy fusion and non-locality. ISMCTS has been integrated into a commercial game, Spades by AI Factory, with over 2.5 million downloads. The Information Capture And ReUSe (ICARUS) framework is also introduced in this thesis. 
The ICARUS framework generalises MCTS enhancements in terms of information capture (from MCTS simulations) and reuse (to improve MCTS tree and simulation policies). The ICARUS framework is used to express existing enhancements, to provide a tool to design new ones, and to rigorously define how MCTS enhancements can be combined. The ICARUS framework is tested across a wide variety of games.", "title": "" }, { "docid": "f7276b8fee4bc0633348ce64594817b2", "text": "Meta-modelling is at the core of Model-Driven Engineering, where it is used for language engineering and domain modelling. The OMG’s Meta-Object Facility is the standard framework for building and instantiating meta-models. However, in the last few years, several researchers have identified limitations and rigidities in such scheme, most notably concerning the consideration of only two meta-modelling levels at the same time. In this paper we present MetaDepth, a novel framework that supports a dual linguistic/ontological instantiation and permits building systems with an arbitrary number of meta-levels through deep meta-modelling. The framework implements advanced modelling concepts allowing the specification and evaluation of derived attributes and constraints across multiple meta-levels, linguistic extensions of ontological instance models, transactions, and hosting different constraint and action languages.", "title": "" }, { "docid": "33431760dfc16c095a4f0b8d4ed94790", "text": "Millions of individuals worldwide are afflicted with acute and chronic respiratory diseases, causing temporary and permanent disabilities and even death. Oftentimes, these diseases occur as a result of altered immune responses. The aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor, acts as a regulator of mucosal barrier function and may influence immune responsiveness in the lungs through changes in gene expression, cell–cell adhesion, mucin production, and cytokine expression. 
This review updates the basic immunobiology of the AhR signaling pathway with regard to inflammatory lung diseases such as asthma, chronic obstructive pulmonary disease, and silicosis, drawing on data from rodent models and humans. Finally, we address the therapeutic potential of targeting the AhR in regulating inflammation during acute and chronic respiratory diseases.", "title": "" }, { "docid": "9a7016a02eda7fcae628197b0625832b", "text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using a CMOS-compatible process flow. Following our recently reported n-TFET, a low-temperature dopant segregation technique was employed on the source side to achieve a steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of > 10^5. Moreover, an SS of 50 mV/decade is maintained for three orders of magnitude of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.", "title": "" }, { "docid": "f24fb451d6ee013a6bbc8737c0eae689", "text": "Data on health literacy (HL) in the population is limited for Asian countries. This study aimed to test the validity of the Mandarin version of the European Health Literacy Survey Questionnaire (HLS-EU-Q) for use in the general public in Taiwan. Multistage stratification random sampling resulted in a sample of 2989 people aged 15 years and above. The HLS-EU-Q was validated by confirmatory factor analysis with excellent model data fit indices. The general HL of the Taiwanese population was 34.4 ± 6.6 on a scale of 50. Multivariate regression analysis showed that higher general HL is significantly associated with higher ability to pay for medication, higher self-perceived social status, higher frequency of watching health-related TV, and community involvement, but also with younger age. 
HL is also associated with health status, health behaviors, and health care accessibility and use. The HLS-EU-Q was found to be a useful tool to assess HL and its associated factors in the general population.", "title": "" }, { "docid": "b5e66fbded6c7be46a8d7c724fd18be9", "text": "In augmented reality (AR), virtual objects and information are overlaid onto the user’s view of the physical world and can appear to become part of the real world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real world and disrupt immersion. End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study that aims at a deeper understanding of the effects of latency on virtual and real-world imagery and its influences on task performance in an AR training task. We utilize an AR simulation approach, in which an outdoor AR training task is simulated in a high-fidelity virtual reality (VR) system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We utilized a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of real and virtual object latency. Our findings indicate that users are able to perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to a reduction in performance, even when overall latency levels were lower than in the matched case. 
The relative results hold up with increased overall latency.", "title": "" }, { "docid": "c25d5fbbf26956d25334f66dbae61c94", "text": "Roman seals associated with collyria (the Latin expression for eye drops/washes and lotions for eye maintenance) provide valuable information about eye care in antiquity. These small, usually stone-made pieces bore engravings with the names of eye doctors and also the collyria used to treat an eye disease. The collyria seals have been found all over the Roman Empire, and Celtic territories in particular, and were usually associated with military camps. In Hispania (Iberian Peninsula), only three collyria seals have been found. These findings speak to eye care in this ancient Roman province as well as to the life of the time. This article takes a look at the utility and social significance of the collyria seals and seeks to give an insight into the ophthalmological practice of the Roman Empire.", "title": "" }, { "docid": "f9880427e28ddfd4877be78e613d603a", "text": "There is mounting evidence that mindfulness meditation is beneficial for the treatment of mood and anxiety disorders, yet little is known regarding the neural mechanisms through which mindfulness modulates emotional responses. Thus, a central objective of this functional magnetic resonance imaging study was to investigate the effects of mindfulness on the neural responses to emotionally laden stimuli. Another major goal of this study was to examine the impact of the extent of mindfulness training on the brain mechanisms supporting the processing of emotional stimuli. Twelve experienced (with over 1000 h of practice) and 10 beginner meditators were scanned as they viewed negative, positive, and neutral pictures in a mindful state and a non-mindful state of awareness. 
Results indicated that the Mindful condition attenuated emotional intensity perceived from pictures, while brain imaging data suggested that this effect was achieved through distinct neural mechanisms for each group of participants. For experienced meditators compared with beginners, mindfulness induced a deactivation of default mode network areas (medial prefrontal and posterior cingulate cortices) across all valence categories and did not influence responses in brain regions involved in emotional reactivity during emotional processing. On the other hand, for beginners relative to experienced meditators, mindfulness induced a down-regulation of the left amygdala during emotional processing. These findings suggest that the long-term practice of mindfulness leads to emotional stability by promoting acceptance of emotional states and enhanced present-moment awareness, rather than by eliciting control over low-level affective cerebral systems from higher-order cortical brain regions. These results have implications for affect-related psychological disorders.", "title": "" }, { "docid": "799573bf08fb91b1ac644c979741e7d2", "text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.", "title": "" }, { "docid": "9548bd2e37fdd42d09dc6828ac4675f9", "text": "Recent years have seen increasing interest in ranking elite athletes and teams in professional sports leagues, and in predicting the outcomes of games. In this work, we draw an analogy between this problem and one in the field of search engine optimization, namely, that of ranking webpages on the Internet. 
Motivated by the famous PageRank algorithm, our TeamRank methods define directed graphs of sports teams based on the observed outcomes of individual games, and use these networks to infer the importance of teams that determines their rankings. In evaluating these methods on data from recent seasons in the National Football League (NFL) and National Basketball Association (NBA), we find that they can predict the outcomes of games with up to 70% accuracy, and that they provide useful rankings of teams that cluster by league divisions. We also propose some extensions to TeamRank that consider overall team win records and shifts in momentum over time.", "title": "" }, { "docid": "da3e64f908cf068b1af2e7492fe52ac4", "text": "Image tagging, also known as image annotation and image conception detection, has been extensively studied in the literature. However, most existing approaches can hardly achieve satisfactory performance owing to the deficiency and unreliability of the manually-labeled training data. In this paper, we propose a new image tagging scheme, termed social assisted media tagging (SAMT), which leverages the abundant user-generated images and the associated tags as the \"social assistance\" to learn the classifiers. We focus on addressing the following major challenges: (a) the noisy tags associated to the web images; and (b) the desirable robustness of the tagging model. We present a joint image tagging framework which simultaneously refines the erroneous tags of the web images as well as learns the reliable image classifiers. In particular, we devise a novel tag refinement module for identifying and eliminating the noisy tags by substantially exploring and preserving the low-rank nature of the tag matrix and the structured sparse property of the tag errors. We develop a robust image tagging module based on the l2,p-norm for learning the reliable image classifiers. 
The correlation of the two modules is well explored within the joint framework to reinforce each other. Extensive experiments on two real-world social image databases illustrate the superiority of the proposed approach as compared to the existing methods.", "title": "" }, { "docid": "01a649c8115810c8318e572742d9bd00", "text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a two-dimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projection-based approaches.", "title": "" }, { "docid": "945dea6576c6131fc33cd14e5a2a0be8", "text": "This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. 
The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.", "title": "" }, { "docid": "e4ea761d48fafeeea1f143833d7362fe", "text": "This paper proposes a novel approach to help computing system administrators in monitoring the security of their systems. This approach is based on modeling the system as a privilege graph exhibiting operational security vulnerabilities and on transforming this privilege graph into a Markov chain corresponding to all possible successful attack scenarios. A set of tools has been developed to generate automatically the privilege graph of a Unix system, to transform it into the corresponding Markov chain and to compute characteristic measures of the operational system security.", "title": "" }, { "docid": "8ca8d0bb6ef41b10392e5d64ca96d2ab", "text": "This longitudinal study provides an analysis of the relationship between personality traits and work experiences with a special focus on the relationship between changes in personality and work experiences in young adulthood. Longitudinal analyses uncovered 3 findings. First, measures of personality taken at age 18 predicted both objective and subjective work experiences at age 26. 
Second, work experiences were related to changes in personality traits from age 18 to 26. Third, the predictive and change relations between personality traits and work experiences were corresponsive: Traits that \"selected\" people into specific work experiences were the same traits that changed in response to those same work experiences. The relevance of the findings to theories of personality development is discussed.", "title": "" } ]
scidocsrr
4b281284e4d4dfeee07a0cc439233c08
Reputation Inflation: Evidence from an Online Labor Market
[ { "docid": "7182814fb9304323a060242d36b10b8a", "text": "Consumer reviews are now part of everyday decision-making. Yet, the credibility of these reviews is fundamentally undermined when businesses commit review fraud, creating fake reviews for themselves or their competitors. We investigate the economic incentives to commit review fraud on the popular review platform Yelp, using two complementary approaches and datasets. We begin by analyzing restaurant reviews that are identified by Yelp’s filtering algorithm as suspicious, or fake – and treat these as a proxy for review fraud (an assumption we provide evidence for). We present four main findings. First, roughly 16% of restaurant reviews on Yelp are filtered. These reviews tend to be more extreme (favorable or unfavorable) than other reviews, and the prevalence of suspicious reviews has grown significantly over time. Second, a restaurant is more likely to commit review fraud when its reputation is weak, i.e., when it has few reviews, or it has recently received bad reviews. Third, chain restaurants – which benefit less from Yelp – are also less likely to commit review fraud. Fourth, when restaurants face increased competition, they become more likely to receive unfavorable fake reviews. Using a separate dataset, we analyze businesses that were caught soliciting fake reviews through a sting conducted by Yelp. These data support our main results, and shed further light on the economic incentives behind a business’s decision to leave fake reviews.", "title": "" } ]
[ { "docid": "8d9a02974ad85aa508dc0f7a85a669f1", "text": "The successful application of data mining in highly visible fields like e-business, marketing and retail has led to its application in other industries and sectors. Among these sectors just discovering is healthcare. The healthcare environment is still „information rich‟ but „knowledge poor‟. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today‟s medical research particularly in Heart Disease Prediction. Number of experiment has been conducted to compare the performance of predictive data mining technique on the same dataset and the outcome reveals that Decision Tree outperforms and some time Bayesian classification is having similar accuracy as of decision tree but other predictive methods like KNN, Neural Networks, Classification based on clustering are not performing well. The second conclusion is that the accuracy of the Decision Tree and Bayesian Classification further improves after applying genetic algorithm to reduce the actual data size to get the optimal subset of attribute sufficient for heart disease prediction.", "title": "" }, { "docid": "3e9d7fed78af293ad6bce35ff34e1ddf", "text": "Ontology researches have been carried out in many diverse research areas in the past decade for numerous purposes especially in the eRecruitment domain. In this article, we would like to take a closer look on the current work of such domain of ontologies such as eRecruitment. Ontology application for e-Recruitment is becoming an important task for matching job postings and applicants semantically in a Semantic web technology using ontology and ontology matching techniques. 
Most of the reviewed papers used currently available, widespread standards and classifications to build human resource ontologies that provide a semantic representation of positions offered and of candidates to fill them; other researchers created their own HR ontologies to build recruitment prototypes. We have reviewed a number of articles and identified a few purposes for which ontology matching", "title": "" }, { "docid": "cceb05e100fe8c9f9dab9f6525d435db", "text": "Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment. Reinforcement learning (RL) methods have been demonstrated to be capable of learning continuous robot controllers from interactions with the environment, even for problems that include friction and contacts. In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.", "title": "" }, { "docid": "8ffc37aeacd3136d3a5801f87a3140df", "text": "Syndromic surveillance detects and monitors individual and population health indicators through sources such as emergency department records. Automated classification of these records can improve outbreak detection speed and diagnosis accuracy. 
Current syndromic systems rely on hand-coded keyword-based methods to parse written fields and may benefit from the use of modern supervised-learning classifier models. In this paper we implement two recurrent neural network models based on long short-term memory (LSTM) and gated recurrent unit (GRU) cells and compare them to two traditional bag-of-words classifiers: multinomial naïve Bayes (MNB) and a support vector machine (SVM). The MNB classifier is one of only two machine learning algorithms currently being used for syndromic surveillance. All four models are trained to predict diagnostic code groups as defined by Clinical Classification Software, first to predict from discharge diagnosis, then from chief complaint fields. The classifiers are trained on 3.6 million de-identified emergency department records from a single United States jurisdiction. We compare performance of these models primarily using the F1 score. We measure absolute model performance to determine which conditions are the most amenable to surveillance based on chief complaint alone. Using discharge diagnoses, the LSTM classifier performs best, though all models exhibit an F1 score above 96.00. The GRU performs best on chief complaints (F1=47.38), and MNB with bigrams performs worst (F1=39.40). Certain syndrome types are easier to detect than others. For example, from chief complaints the GRU model predicts alcohol-related disorders well (F1=78.91) but predicts influenza poorly (F1=14.80). In all instances the RNN models outperformed the bag-of-words classifiers, suggesting deep learning models could substantially improve the automatic classification of unstructured text for syndromic surveillance. INTRODUCTION Syndromic surveillance—the detection and monitoring of individual and population health indicators that are discernible before confirmed diagnoses are made (Mandl et al. 2004)—can draw from many data sources. 
Electronic health records of emergency department encounters, especially the free-text chief complaint field, are a common focus for syndromic surveillance (Yoon, Ising, & Gunn 2017). In practice, a computer algorithm associates the text of the chief complaint field with predefined syndromes, often by picking out keywords or parts of keywords, or by a machine learning algorithm based on a mathematical representation of the chief complaint text. In this paper, we explore recurrent neural networks as an alternative to existing methods for associating chief complaint text with syndromes. Overview of Chief Complaint Classifiers: In a recent overview of chief complaint classifiers (Conway et al., 2013), the authors divide chief complaint classifiers into three categories: keyword-based classifiers, linguistic classifiers, and statistical classifiers.", "title": "" }, { "docid": "c647b0b28c61da096b781b4aa3c89f03", "text": "This article concerns the real-world importance of leadership for the success or failure of organizations and social institutions. The authors propose conceptualizing leadership and evaluating leaders in terms of the performance of the team or organization for which they are responsible. The authors next offer a taxonomy of the dependent variables used as criteria in leadership studies. A review of research using this taxonomy suggests that the vast empirical literature on leadership may tell us more about the success of individual managerial careers than the success of these people in leading groups, teams, and organizations. 
The authors then summarize the evidence showing that leaders do indeed affect the performance of organizations--for better or for worse--and conclude by describing the mechanisms through which they do so.", "title": "" }, { "docid": "db9887ea5f96cd4439ca95ad3419407c", "text": "Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photo-consistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. 
We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real-world scenarios using the consumer Lytro and Lytro Illum light-field cameras.", "title": "" }, { "docid": "06ab903f3de4c498e1977d7d0257f8f3", "text": "BACKGROUND\nThe analysis of microbial communities through DNA sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. 
With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.", "title": "" }, { "docid": "cd8efcf02f3a84b6cf02f72ba85de323", "text": "There are a variety of grand challenges for text extraction in scene videos by robots and users, e.g., heterogeneous background, varied text, nonuniform illumination, arbitrary motion and poor contrast. 
Most previous video text detection methods are investigated with local information, i.e., within individual frames, with limited performance. In this paper, we propose a unified tracking based text detection system by learning locally and globally, which uniformly integrates detection, tracking, recognition and their interactions. In this system, scene text is first detected locally in individual frames. Second, an optimal tracking trajectory is learned and linked globally with all detection, recognition and prediction information by dynamic programming. With the tracking trajectory, final detection and tracking results are simultaneously and immediately obtained. Moreover, our proposed techniques are extensively evaluated on several public scene video text databases, and are much better than the state-of-the-art methods.", "title": "" }, { "docid": "6e90247455ac6a8e23504b1ec422b9f1", "text": "The paper deals with the wireless sensor-based remote control of mobile robots motion in an unknown environment with obstacles using the Bluetooth wireless transmission and Sun SPOT technology. The Sun SPOT is designed to be a flexible development platform, capable of hosting widely differing application modules. Web technologies are changing the education in robotics. A feature of remote control laboratories is that users can interact with real mobile robot motion processes through the Internet. Motion control of mobile robots is a very important research field today, because mobile robots are an interesting subject both in scientific research and practical applications. In this paper the object of the remote control is the Boe-Bot mobile robot from Parallax. This Boe-Bot mobile robot is the simplest, low-cost platform and the most suitable for the small-sized, light, battery-driven autonomous vehicle. The vehicle has two driving wheels and the angular velocities of the two wheels are independently controlled. 
When the vehicle is moving towards the target in an unknown environment with obstacles, an avoiding strategy is necessary. A remote control program has been implemented.", "title": "" }, { "docid": "fed23432144a6929c4f3442b10157771", "text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). 
Can IT at all be used to handle knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assumption that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that is omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontologywise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. Here, my primary interest is on the group and organisational levels. 
However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view of or the objective approach to knowledge as some absolute and universal truth has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. 
Maybe it is a little bit", "title": "" }, { "docid": "04fceca06c8f54b7d3eef1a4244e6c2a", "text": "Unexpected shortage of water supply is a common phenomenon, especially in dense populations such as in hostels. Water supply at the students’ hostels is usually drawn from a tank at the roof top of the building. Apparently there is no early warning system to monitor the tank water level when it has reached the critical level. The situation worsens when there is no personnel or technician in charge to do the maintenance at the time it is needed. It becomes worse especially at the weekends and public holidays. Students have to wait for a couple of days for the water supply to resume. This paper presents the development of a water level monitoring system with an integration of a GSM module to alert the person-in-charge through Short Message Service (SMS). The water level is monitored and its data sent through SMS to the intended technician’s mobile phone upon reaching the critical level. The prototype was tested and functioned properly as a means to reduce the risk of unexpected shortage of water supply.", "title": "" }, { "docid": "082a077db6f8b0d41c613f9a50934239", "text": "Traceability is recognized to be important for supporting agile development processes. However, after analyzing many of the existing traceability approaches it can be concluded that they strongly depend on traditional development process characteristics. Within this paper it is justified that this is a drawback to support adequately agile processes. As it is discussed, some concepts do not have the same semantics for traditional and agile methodologies. This paper proposes three features that traceability models should support to be less dependent on a specific development process: (1) user-definable traceability links, (2) roles, and (3) linkage rules. To present how these features can be applied, an emerging traceability metamodel (TmM) will be used within this paper. 
TmM supports the definition of traceability methodologies adapted to the needs of each project. As it is shown, after introducing these three features into traceability models, two main advantages are obtained: 1) the support they can provide to agile process stakeholders is significantly more extensive, and 2) it will be possible to achieve a higher degree of automation. In this sense it will be feasible to have a methodical trace acquisition and maintenance process adapted to agile processes.", "title": "" }, { "docid": "49661a36a9f0053b96ce3cd32c604c3a", "text": "With the severe spectrum shortage in conventional cellular bands, large-scale antenna systems in the mmWave bands can potentially help to meet the anticipated demands of mobile traffic in the 5G era. There are many challenging issues, however, regarding the implementation of digital beamforming in large-scale antenna systems: complexity, energy consumption, and cost. In a practical large-scale antenna deployment, hybrid analog and digital beamforming structures can be important alternative choices. In this article, optimal designs of hybrid beamforming structures are investigated, with the focus on an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid beamforming structure. Optimal analog and digital beamforming designs in a multi-user beamforming scenario are discussed. Also, the energy efficiency and spectrum efficiency of the N × M beamforming structure are analyzed, including their relationship at the green point (i.e., the point with the highest energy efficiency) on the energy efficiency-spectrum efficiency curve, the impact of N on the energy efficiency performance at a given spectrum efficiency value, and the impact of N on the green point energy efficiency. These results can be conveniently utilized to guide practical LSAS design for optimal energy/ spectrum efficiency trade-off. 
Finally, a reference signal design for the hybrid beamform structure is presented, which achieves better channel estimation performance than the method solely based on analog beamforming. It is expected that large-scale antenna systems with hybrid beamforming structures in the mmWave band can play an important role in 5G.", "title": "" }, { "docid": "41353a12a579f72816f1adf3cba154dd", "text": "The crux of our initialization technique is n-gram selection, which assists neural networks to extract important n-gram features at the beginning of the training process. In the following tables, we illustrate those selected n-grams of different classes and datasets to understand our technique intuitively. Since all of MR, SST-1, SST-2, CR, and MPQA are sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1). N-grams selected by our method in SUBJ and TREC are shown in Table 2 and Table 3.", "title": "" }, { "docid": "479c83803b5b53c72cc1715ffdad084f", "text": "SPADE is an open source software infrastructure for data provenance collection and management. The underlying data model used throughout the system is graph-based, consisting of vertices and directed edges that are modeled after the node and relationship types described in the Open Provenance Model. The system has been designed to decouple the collection, storage, and querying of provenance metadata. At its core is a novel provenance kernel that mediates between the producers and consumers of provenance information, and handles the persistent storage of records. It operates as a service, peering with remote instances to enable distributed provenance queries. The provenance kernel on each host handles the buffering, filtering, and multiplexing of incoming metadata from multiple sources, including the operating system, applications, and manual curation. Provenance elements can be located locally with queries that use wildcard, fuzzy, proximity, range, and Boolean operators. 
Ancestor and descendant queries are transparently propagated across hosts until a terminating expression is satisfied, while distributed path queries are accelerated with provenance sketches.", "title": "" }, { "docid": "7d2243763086490afea2505ef2d81a69", "text": "This paper proposes a GPU-based method that can visualize voxelized surface data with fine and complicated features, has high rendering quality at interactive frame rates, and provides low memory consumption. The surface data is compressed using run-length encoding (RLE) for each level of detail (LOD). Then, the loop for the rendering process is performed on the GPU for the position of the viewpoint at each time instant. The scene is raycasted in planes, where each plane is perpendicular to the horizontal plane in the world coordinate system and passes through the viewpoint. For each plane, one ray is cast to rasterize all RLE elements intersecting this plane, starting from the viewpoint and ranging up to the maximum view distance. This rasterization process projects each RLE element passing the occlusion test onto the screen at a LOD that decreases with the distance of the RLE element from the viewpoint. Finally, the smoothing of voxels in screen space and full screen anti-aliasing is performed. To provide lighting calculations without storing the normal vector inside the RLE data structure, our algorithm recovers the normal vectors from the rendered scene’s depth buffer. After the viewpoint changes, the same process is re-executed for the new viewpoint. Experiments using different scenes have shown that the proposed algorithm is faster than the equivalent CPU implementation and other related methods. Our experiments further prove that this method is memory efficient and achieves high quality results. 
key words: volume data, voxels, raycasting, splatting, view-transform, run-length encoding", "title": "" }, { "docid": "42e2a8b8c1b855fba201e3421639d80d", "text": "Fraudulent behaviors in Google’s Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps, by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay’s PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology, and reveals a new type of attack campaign, where users are harassed into writing positive reviews, and install and review other apps.", "title": "" }, { "docid": "09168164e47fd781e4abeca45fb76c35", "text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. 
This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. IMA: Integrated Modular Avionics, see [RTCA11]. SEooC: Safety Element out of Context, see [ISO11].", "title": "" } ]
scidocsrr
48abb15ae19b9881b249b646984e9683
Customized Regression Model for Airbnb Dynamic Pricing
[ { "docid": "15dbf1ad05c8219be484c01145c09b6c", "text": "In this paper, we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d-dimensional feature vectors, we prove an O(√(Td ln(KT ln(T)/δ))) regret bound that holds with probability 1 − δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors.", "title": "" }, { "docid": "bc7c5ab8ec28e9a5917fc94b776b468a", "text": "Reasonable house price prediction is a meaningful task, and the house clustering is an important process in the prediction. In this paper, we propose the method of Multi-Scale Affinity Propagation (MSAP), aggregating the houses appropriately by the landmark and the facility. Then in each cluster, using a Linear Regression model with Normal Noise (LRNN) predicts the reasonable price, which is verified by the increasing number of the renting reviews. Experiments show that the precision of the reasonable price prediction improved greatly via the method of MSAP.", "title": "" } ]
[ { "docid": "a3a83c8c0592e8335f4687d0e2ee802f", "text": "The rapid growth and development in technology has made computer as a weapon which can cause great loss if used with wrong intentions. Computer forensics aims at collecting, and analyzing evidences from the seized devices in such ways so that they are admissible in court of law. Anti-forensics, on the other hand, is collection of tricks and techniques that are used and applied with clear aim of forestalling the forensic investigation. Crime and crime prevention go hand in hand. Once a crime surfaces, then a defense is developed, then a new crime counters the new defense. Hence along with continuous developments in forensics, a thorough study and knowledge of developments in anti-forensics is equally important. This paper focuses on understanding different techniques that can be used for anti-forensic purposes with help of open source tools.", "title": "" }, { "docid": "b57377a695ce7c5114d61bbe4f29e7a1", "text": "Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods.", "title": "" }, { "docid": "907940110f89714bf20a8395cd8932d5", "text": "Polyphonic sound event detection (polyphonic SED) is an interesting but challenging task due to the concurrence of multiple sound events. Recently, SED methods based on convolutional neural networks (CNN) and recurrent neural networks (RNN) have shown promising performance. 
Generally, CNN are designed for local feature extraction while RNN are used to model the temporal dependency among these local features. Despite their success, it is still insufficient for existing deep learning techniques to separate individual sound event from their mixture, largely due to the overlapping characteristic of features. Motivated by the success of Capsule Networks (CapsNet), we propose a more suitable capsule based approach for polyphonic SED. Specifically, several capsule layers are designed to effectively select representative frequency bands for each individual sound event. The temporal dependency of capsule's outputs is then modeled by a RNN. And a dynamic threshold method is proposed for making the final decision based on RNN outputs. Experiments on the TUT-SED Synthetic 2016 dataset show that the proposed approach obtains an F1-score of 68.8% and an error rate of 0.45, outperforming the previous state-of-the-art method of 66.4% and 0.48, respectively.", "title": "" }, { "docid": "ad11946cfb127e19b0ee80f5d77dbe93", "text": "Air quality has great impact on individual and community health. In this demonstration, we present Citisense: a mobile air quality system that enables users to track their personal air quality exposure for discovery, self-reflection, and sharing within their local communities and online social networks.", "title": "" }, { "docid": "e6289c25323dd5f4b7ff6648201a636e", "text": "A new wideband differentially fed dual-polarized antenna with stable radiation pattern for base stations is proposed and studied. A cross-shaped feeding structure is specially designed to fit the differentially fed scheme and four parasitic loop elements are employed to achieve a wide impedance bandwidth. A stable antenna gain and a stable radiation pattern are realized by using a rectangular cavity-shaped reflector instead of a planar one. A detailed parametric study was performed to optimize the antenna’s performances. 
After that, a prototype was fabricated and tested. Measured results show that the antenna achieves a wide impedance bandwidth of 52% with differential standing-wave ratio <1.5 from 1.7 to 2.9 GHz and a high differential port-to-port isolation of better than 26.3 dB within the operating frequency bandwidth. A stable antenna gain ( $\\approx 8$ dBi) and a stable radiation pattern with 3-dB beamwidth of 65° ±5° were also found over the operating frequencies. Moreover, the proposed antenna can be easily built by using printed circuit board fabrication technique due to its compact and planar structure.", "title": "" }, { "docid": "af0097bec55577049b08f2bc9e65dd4d", "text": "The recent surge in using social media has created a massive amount of unstructured textual complaints about products and services. However, discovering and quantifying potential product defects from large amounts of unstructured text is a nontrivial task. In this paper, we develop a probabilistic defect model (PDM) that identifies the most critical product issues and corresponding product attributes, simultaneously. We facilitate domain-oriented key attributes (e.g., product model, year of production, defective components, symptoms, etc.) of a product to identify and acquire integral information of defect. We conduct comprehensive evaluations including quantitative evaluations and qualitative evaluations to ensure the quality of discovered information. Experimental results demonstrate that our proposed model outperforms existing unsupervised method (K-Means Clustering), and could find more valuable information. Our research has significant managerial implications for mangers, manufacturers, and policy makers. [Category: Data and Text Mining]", "title": "" }, { "docid": "ea2af110b27015b83659182948a32b36", "text": "BACKGROUND\nDescent of the lateral aspect of the brow is one of the earliest signs of aging. 
The purpose of this study was to describe an open surgical technique for lateral brow lifts, with the goal of achieving reliable, predictable, and long-lasting results.\n\n\nMETHODS\nAn incision was made behind and parallel to the temporal hairline, and then extended deeper through the temporoparietal fascia to the level of the deep temporal fascia. Dissection was continued anteriorly on the surface of the deep temporal fascia and subperiosteally beyond the temporal crest, to the level of the superolateral orbital rim. Fixation of the lateral brow and tightening of the orbicularis oculi muscle was achieved with the placement of sutures that secured the tissue directly to the galea aponeurotica on the lateral aspect of the incision. An additional fixation was made between the temporoparietal fascia and the deep temporal fascia, as well as between the temporoparietal fascia and the galea aponeurotica. The excess skin in the temporal area was excised and the incision was closed.\n\n\nRESULTS\nA total of 519 patients were included in the study. Satisfactory lateral brow elevation was obtained in most of the patients (94.41%). The following complications were observed: total relapse (n=8), partial relapse (n=21), neurapraxia of the frontal branch of the facial nerve (n=5), and limited alopecia in the temporal incision (n=9).\n\n\nCONCLUSIONS\nWe consider this approach to be a safe and effective procedure, with long-lasting results.", "title": "" }, { "docid": "138ada76eb85092ec527e1265bffa36b", "text": "Web service discovery is becoming a challenging and time consuming task due to large number of Web services available on the Internet. Organizing the Web services into functionally similar clusters is one of a very efficient approach for reducing the search space. 
However, similarity calculation methods that are used in current approaches such as string-based, corpus-based, knowledge-based and hybrid methods have problems that include discovering semantic characteristics, loss of semantic information, encoding fine-grained information and shortage of high-quality ontologies. Because of these issues, the approaches couldn't identify the correct clusters for some services and placed them in wrong clusters. As a result of this, cluster performance is reduced. This paper proposes post-filtering approach to increase precision by rearranging services incorrectly clustered. Our approach uses context aware method that learns term similarity by machine learning under domain context. Experimental results show that our post-filtering approach works efficiently.", "title": "" }, { "docid": "ccf7390abc2924e4d2136a2b82639115", "text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. 
Future research directions that will be key to providing network security in SDN are identified.", "title": "" }, { "docid": "2da6c199c7561855fde9be6f4798a4af", "text": "Ontogenetic development of the digestive system in golden pompano (Trachinotus ovatus, Linnaeus 1758) larvae was histologically and enzymatically studied from hatch to 32 day post-hatch (DPH). The development of digestive system in golden pompano can be divided into three phases: phase I starting from hatching and ending at the onset of exogenous feeding; phase II starting from first feeding (3 DPH) and finishing at the formation of gastric glands; and phase III starting from the appearance of gastric glands on 15 DPH and continuing onward. The specific activities of trypsin, amylase, and lipase increased sharply from the onset of first feeding to 5–7 DPH, followed by irregular fluctuations. Toward the end of this study, the specific activities of trypsin and amylase showed a declining trend, while the lipase activity remained at similar levels as it was at 5 DPH. The specific activity of pepsin was first detected on 15 DPH and increased with fish age. The dynamics of digestive enzymes corresponded to the structural development of the digestive system. The enzyme activities tend to be stable after the formation of the gastric glands in fish stomach on 15 DPH. The composition of digestive enzymes in larval pompano indicates that fish are able to digest protein, lipid and carbohydrate at early developmental stages. Weaning of larval pompano is recommended from 15 DPH onward. 
Results of the present study lead to a better understanding of the ontogeny of golden pompano during the larval stage and provide a guide to feeding and weaning of this economically important fish in hatcheries.", "title": "" }, { "docid": "7ff2f2057d7e38f0258cd361c978eb70", "text": "Sustainable production of renewable energy is being hotly debated globally since it is increasingly understood that first generation biofuels, primarily produced from food crops and mostly oil seeds are limited in their ability to achieve targets for biofuel production, climate change mitigation and economic growth. These concerns have increased the interest in developing second generation biofuels produced from non-food feedstocks such as microalgae, which potentially offer greatest opportunities in the longer term. This paper reviews the current status of microalgae use for biodiesel production, including their cultivation, harvesting, and processing. The microalgae species most used for biodiesel production are presented and their main advantages described in comparison with other available biodiesel feedstocks. The various aspects associated with the design of microalgae production units are described, giving an overview of the current state of development of algae cultivation systems (photo-bioreactors and open ponds). Other potential applications and products from microalgae are also presented such as for biological sequestration of CO2, wastewater treatment, in human health, as food additive, and for aquaculture. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "91e0722c00b109d7db137fb3468c088a", "text": "This paper proposes a novel flexible piezoelectric micro-machined ultrasound transducer, which is based on PZT and a polyimide substrate. The transducer is made on the polyimide substrate and packaged with medical polydimethylsiloxane. 
Instead of etching the PZT ceramic, this paper proposes a method of placing diced PZT blocks into pre-etched holes in the polyimide. The device works in d31 mode and its electromechanical coupling factor is 22.25%. Its flexibility, good conformal contact with skin surfaces and suitable resonant frequency make the device appropriate for heart imaging. The flexibly packaged ultrasound transducer also shows good waterproof performance after hundreds of ultrasonic electric tests in water. It is a promising ultrasound transducer and will be an effective supplementary ultrasound imaging method in practical applications.\", \"title\": \"\" },
We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics.", "title": "" }, { "docid": "8704a4033132a1d26cf2da726a60045e", "text": "In practical classification, there is often a mix of learnable and unlearnable classes and only a classifier above a minimum performance threshold can be deployed. This problem is exacerbated if the training set is created by active learning. The bias of actively learned training sets makes it hard to determine whether a class has been learned. We give evidence that there is no general and efficient method for reducing the bias and correctly identifying classes that have been learned. However, we characterize a number of scenarios where active learning can succeed despite these difficulties.", "title": "" }, { "docid": "b3556499bf5d788de7c4d46100ac3a9f", "text": "Reuse has been proposed as a microarchitecture-level mechanism to reduce the amount of executed instructions, collapsing dependencies and freeing resources for other instructions. Previous works have used reuse domains such as memory accesses, integer or not floating point, based on the reusability rate. However, these works have not studied the specific contribution of reusing different subsets of instructions for performance. In this work, we analysed the sensitivity of trace reuse to instruction subsets, comparing their efficiency to their complementary subsets. We also studied the amount of reuse that can be extracted from loops. 
Our experiments show that disabling trace reuse outside loops does not harm performance but reduces by 12% the number of accesses to the reuse table. Our experiments with reuse subsets show that most of the speedup can be retained even when not reusing all types of instructions previously found in the reuse domain. arXiv:1711.06672v1 [cs.AR] 17 Nov 2017\", \"title\": \"\" },
This review focuses on the consequences of sleep loss both in controlled laboratory environments and in clinical studies involving medical personnel.", "title": "" }, { "docid": "fa320a8347093bca4817da2ed7c54e61", "text": "Gases for electrical insulation are essential for the operation of electric power equipment. This Review gives a brief history of gaseous insulation that involved the emergence of the most potent industrial greenhouse gas known today, namely sulfur hexafluoride. SF6 paved the way to space-saving equipment for the transmission and distribution of electrical energy. Its ever-rising usage in the electrical grid also played a decisive role in the continuous increase of atmospheric SF6 abundance over the last decades. This Review broadly covers the environmental concerns related to SF6 emissions and assesses the latest generation of eco-friendly replacement gases. They offer great potential for reducing greenhouse gas emissions from electrical equipment but at the same time involve technical trade-offs. The rumors of one or the other being superior seem premature, in particular because of the lack of dielectric, environmental, and chemical information for these relatively novel compounds and their dissociation products during operation.", "title": "" }, { "docid": "c2bd5af9470671eabe3a591121cd0ebc", "text": "Menus are a primary control in current interfaces, but there has been relatively little theoretical work to model their performance. We propose a model of menu performance that goes beyond previous work by incorporating components for Fitts' Law pointing time, visual search time when novice, Hick-Hyman Law decision time when expert, and for the transition from novice to expert behaviour. The model is able to predict performance for many different menu designs, including adaptive split menus, items with different frequencies and sizes, and multi-level menus. 
We tested the model by comparing predictions for four menu designs (traditional menus, recency and frequency based split menus, and an adaptive 'morphing' design) with empirical measures. The empirical data matched the predictions extremely well, suggesting that the model can be used to explore a wide range of menu possibilities before implementation.", "title": "" }, { "docid": "a3cd3ec70b5d794173db36cb9a219403", "text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. 
In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as form- and force-closure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d point cloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete.[1] This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3-d position (and orientation) of the hand, but also address the high-DOF problem of choosing the positions of all the fingers.
Our approach begins by computing a number of features of grasp quality, using both 2-d image and 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, and a game controller. [1] For example, standard stereo vision fails to return depth values for textureless portions of the object, thus its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)\", \"title\": \"\" } ]
scidocsrr
a1aaf1734a47c6c8c0523cf4d8a4766a
The State of the Art in Multiple Object Tracking Under Occlusion in Video Sequences
[ { "docid": "bc4791523b11a235d0b1c9e660ea1139", "text": "In this paper, we present a novel system and effective algorithms for soccer video segmentation. The output, about whether the ball is in play, reveals high-level structure of the content. The first step is to classify each sample frame into 3 kinds of view using a unique domain-specific feature, grass-area-ratio. Here the grass value and classification rules are learned and automatically adjusted to each new clip. Then heuristic rules are used in processing the view label sequence, and obtain play/break status of the game. The results provide good basis for detailed content analysis in next step. We also show that lowlevel features and mid-level view classes can be combined to extract more information about the game, via the example of detecting grass orientation in the field. The results are evaluated under different metrics intended for different applications; the best result in segmentation is 86.5%.", "title": "" } ]
[ { "docid": "ca0511810895cfdce607f4fc4df2f4f7", "text": "This paper presents an extension of existing software architecture tools to model physical systems, their interconnections, and the interactions between physical and cyber components. We introduce a new cyber-physical system (CPS) architectural style to support the construction of architectural descriptions of complete systems and to serve as the reference context for analysis and evaluation of design alternatives using existing model-based tools. The implementation of the CPS architectural style in AcmeStudio includes behavioral annotations on components and connectors using either finite state processes (FSP) or linear hybrid automata (LHA) with plug-ins to perform behavior analysis. The application of the CPS architectural style is illustrated for the STARMAC quadrotor.", "title": "" }, { "docid": "0f8cf50e2eca67998138806360713267", "text": "Voice-activated devices are becoming common place: people can use their voice to control smartphones, smart vacuum robots, and interact with their smart homes through virtual assistant devices like Amazon Echo or Google Home. The spread of such voice-controlled devices is possible thanks to the increasing capabilities of natural language processing, and generally have a positive impact on the device accessibility, e.g., for people with disabilities. However, a consequence of these devices embracing voice control is that people with dysarthria or other speech impairments may be unable to control their intelligent environments, at least with proficiency. This paper investigates to which extent people with dysarthria can use and be understood by the three most common virtual assistants, namely Siri, Google Assistant, and Amazon Alexa. Starting from the sentences in the TORGO database of dysarthric articulation, the differences between such assistants are investigated and discussed. 
Preliminary results show that the three virtual assistants have comparable performance, with a recognition accuracy in the range of 50-60%.\", \"title\": \"\" },
In this regard, the OpenStreetMap (OSM) project has been one of the most successful representatives, providing LU features. The main objective of this paper is to comparatively assess the accuracy of the contributed OSM-LU features in four German metropolitan areas versus the pan-European GMESUA dataset as a reference. Kappa index analysis along with per-class user’s and producer’s accuracies are used for accuracy assessment. The empirical findings suggest OSM as an alternative complementary source for extracting LU information, with more than 50 % of the selected cities mapped by mappers. Moreover, the results identify which land types preserve high/moderate/low accuracy across cities for urban LU mapping. The findings strengthen the potential of collaboratively collected LU features for providing temporal LU maps as well as updating/enriching existing inventories. Furthermore, such a collaborative approach can be used for collecting a global coverage of LU information specifically in countries in which temporal and monetary efforts could be minimized.\", \"title\": \"\" },
In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.\", \"title\": \"\" },
As a dynamic walking algorithm, we proposed what we call the “longitudinal acceleration trajectory”. This trajectory was applied to an intermittent trot gait. The algorithm was tested with the developed robot, and its performance was confirmed through experiments.\", \"title\": \"\" },
Newly emerging developments in offline TMS research for cognitive neuroscience and neurotherapeutics are outlined.", "title": "" }, { "docid": "f526623f120390a4521aba83e414617a", "text": "Visual Odometry (VO) can be categorized as being either direct or feature based. When the system is calibrated photometrically, and images are captured at high rates, direct methods have shown to outperform feature-based ones in terms of accuracy and processing time; they are also more robust to failure in feature-deprived environments. On the downside, Direct methods rely on heuristic motion models to seed the estimation of camera motion between frames; in the event that these models are violated (e.g., erratic motion), Direct methods easily fail. This paper proposes a novel system entitled FDMO (Feature assisted Direct Monocular Odometry), which complements the advantages of both direct and featured based techniques. FDMO bootstraps indirect feature tracking upon the sub-pixel accurate localized direct keyframes only when failure modes (e.g., large baselines) of direct tracking occur. Control returns back to direct odometry when these conditions are no longer violated. Efficiencies are introduced to help FDMO perform in real time. FDMO shows significant drift (alignment, rotation & scale) reduction when compared to DSO & ORB SLAM when evaluated using the TumMono and EuroC datasets.", "title": "" }, { "docid": "c61e5bae4dbccf0381269980a22f726a", "text": "—Web mining is the application of the data mining which is useful to extract the knowledge. Web mining has been explored to different techniques have been proposed for the variety of the application. Most research on Web mining has been from a 'data-centric' or information based point of view. Web usage mining, Web structure mining and Web content mining are the types of Web mining. Web usage mining is used to mining the data from the web server log files. 
Web personalization is one of the areas of Web usage mining. It can be defined as the delivery of content tailored to a particular user; personalization requires implicitly or explicitly collecting visitor information and leveraging that knowledge in the content delivery framework to manipulate what information is presented to users and how it is presented. In this paper, we focus on various Web personalization categories and their research issues.\", \"title\": \"\" },
In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2%-10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.\", \"title\": \"\" },
However, it easily suffers from overfitting. This paper proposes a hybrid classification technique combining Bayesian and Extreme Learning Machine (B-ELM) methods for heartbeat recognition in arrhythmia detection (AD). The proposed technique is capable of detecting arrhythmia classes with a maximum accuracy of 98.09% and a low computational time of about 2.5 s.\", \"title\": \"\" },
We propose, therefore, that brain models that recognize the adaptive roles that cognition and experience play during adolescence provide a more complete and helpful picture of this period of development.", "title": "" }, { "docid": "1bde13d981e80f595a412e4ef3cf37be", "text": "A widely used defense practice against malicious traffic on the Internet is through blacklists: lists of prolific attack sources are compiled and shared. The goal of blacklists is to predict and block future attack sources. Existing blacklisting techniques have focused on the most prolific attack sources and, more recently, on collaborative blacklisting. In this paper, we formulate the problem of forecasting attack sources (also referred to as \"predictive blacklisting\") based on shared attack logs, as an implicit recommendation system. We compare the performance of existing approaches against the upper bound for prediction and we demonstrate that there is much room for improvement. Inspired by the recent NetFlix competition, we propose a multi-level collaborative filtering model that is adjusted and tuned specifically for the attack forecasting problem. Our model captures and combines various factors namely: attacker-victim history (using time-series) and attackers and/or victims interactions (using neighborhood models). We evaluate our combined method on one month of logs from Dshield.org and demonstrate that it improves significantly the prediction rate over state-of-the-art methods as well as the robustness against poisoning attacks.", "title": "" }, { "docid": "ccd7e49646f1ef1d31f033f84c63c6e6", "text": "Language modeling is a prototypical unsupervised task of natural language processing (NLP). It has triggered the developments of essential bricks of models used in speech recognition, translation or summarization. 
More recently, language modeling has been shown to give a sensible loss function for learning high-quality unsupervised representations in tasks like text classification (Howard & Ruder, 2018), sentiment detection (Radford et al., 2017) or word vector learning (Peters et al., 2018) and there is thus a revived interest in developing better language models. More generally, improvements in sequential prediction models are believed to be beneficial for a wide range of applications like model-based planning or reinforcement learning whose models have to encode some form of memory.", "title": "" }, { "docid": "b16bb73155af7f141127617a7e9fdde1", "text": "Organizing code into coherent programs and relating different programs to each other represents an underlying requirement for scaling genetic programming to more difficult task domains. Assuming a model in which policies are defined by teams of programs, in which team and program are represented using independent populations and coevolved, has previously been shown to support the development of variable sized teams. In this work, we generalize the approach to provide a complete framework for organizing multiple teams into arbitrarily deep/wide structures through a process of continuous evolution; hereafter the Tangled Program Graph (TPG). Benchmarking is conducted using a subset of 20 games from the Arcade Learning Environment (ALE), an Atari 2600 video game emulator. The games considered here correspond to those in which deep learning was unable to reach a threshold of play consistent with that of a human. Information provided to the learning agent is limited to that which a human would experience. That is, screen capture sensory input, Atari joystick actions, and game score. The performance of the proposed approach exceeds that of deep learning in 15 of the 20 games, with 7 of the 15 also exceeding that associated with a human level of competence. 
Moreover, in contrast to solutions from deep learning, solutions discovered by TPG are also very ‘sparse’. Rather than assuming that all of the state space contributes to every decision, each action in TPG is resolved following execution of a subset of an individual’s graph. This results in significantly lower computational requirements for model building than presently the case for deep learning.", "title": "" }, { "docid": "7aac766efdaad42a064d461b03d6d69c", "text": "Programmers frequently use instructive code examples found on the Web to overcome cognitive barriers while programming. These examples couple the concrete functionality of code with rich contextual information about how the code works. However, using these examples necessitates understanding, configuring, and integrating the code, all of which typically take place after the example enters the user's code and has been removed from its original instructive context. In short, a user's interaction with an example continues well after the code is pasted. This paper investigates whether treating examples as \"first-class\" objects in the code editor - rather than simply as strings of text - will allow programmers to use examples more effectively. We explore this through the creation and evaluation of Codelets. A Codelet is presented inline with the user's code, and consists of a block of example code and an interactive helper widget that assists the user in understanding and integrating the example. The Codelet persists throughout the example's lifecycle, remaining accessible even after configuration and integration is done. A comparative laboratory study with 20 participants found that programmers were able to complete tasks involving examples an average of 43% faster when using Codelets than when using a standard Web browser.", "title": "" }, { "docid": "578d40b5c82fcc59fa2333e47a99d84c", "text": "Brain tumor is one of the major causes of death among people. 
It is evident that the chances of survival can be increased if the tumor is detected and classified correctly at its early stage. Conventional methods involve invasive techniques such as biopsy, lumbar puncture and the spinal tap method, to detect and classify brain tumors into benign (non cancerous) and malignant (cancerous). A computer aided diagnosis algorithm has been designed so as to increase the accuracy of brain tumor detection and classification, and thereby replace conventional invasive and time consuming techniques. This paper introduces an efficient method of brain tumor classification, where the real Magnetic Resonance (MR) images are classified into normal, non cancerous (benign) brain tumor and cancerous (malignant) brain tumor. The proposed method follows three steps: (1) wavelet decomposition, (2) textural feature extraction and (3) classification. The Discrete Wavelet Transform is first employed using the Daubechies wavelet (db4) for decomposing the MR image into different levels of approximate and detailed coefficients, and then the gray level co-occurrence matrix is formed, from which texture statistics such as energy, contrast, correlation, homogeneity and entropy are obtained. The results of the co-occurrence matrices are then fed into a probabilistic neural network for further classification and tumor detection. The proposed method has been applied on real MR images, and the accuracy of classification using the probabilistic neural network is found to be nearly 100%.", "title": "" }, { "docid": "c06e1491b0aabbbd73628c2f9f45d65d", "text": "With the integration of deep learning into the traditional field of reinforcement learning in recent decades, the spectrum of applications that artificial intelligence caters to is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. 
Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring the possibility and effectiveness of using conventional state-of-the-art reinforcement learning methods for playing Pacman maps. In particular, this paper demonstrates that Combined DQN, a variation of Rainbow DQN, is able to attain high performance in small maps such as 506Pacman, smallGrid and mediumGrid. It was also demonstrated that the trained agents could play Pacman maps similar to those seen in training, with limited performance. Nevertheless, the algorithm suffers due to its data inefficiency and lack of human-like features, which may be remedied in the future by introducing more human-like features into the algorithm, such as intrinsic motivation and imagination.", "title": "" } ]
scidocsrr
9207f7e8dc972a7dc46e2addab30b5f9
Arc-fault unwanted tripping survey with UL 1699B-listed products
[ { "docid": "1634b893909c900194f0f936d3dcdc10", "text": "The 2011 National Electrical Code® (NEC®) added Article 690.11 that requires photovoltaic (PV) systems on or penetrating a building to include a listed DC arc fault protection device. To fill this new market, manufacturers are developing new Arc Fault Circuit Interrupters (AFCIs). Comprehensive and challenging testing has been conducted using a wide range of PV technologies, system topologies, loads and noise sources. The Distributed Energy Technologies Laboratory (DETL) at Sandia National Laboratories (SNL) has used multiple reconfigurable arrays with a variety of module technologies, inverters, and balance of system (BOS) components to characterize new Photovoltaic (PV) DC AFCIs and Arc Fault Detectors (AFDs). The device's detection capabilities, characteristics and nuisance tripping avoidance were the primary purpose of the testing. SNL and Eaton Corporation collaborated to test an Eaton AFD prototype and quantify arc noise for a wide range of PV array configurations and the system responses. The tests were conducted by generating controlled, series PV arc faults between PV modules. Arc fault detection studies were performed on systems using aged modules, positive- and negative-grounded arrays, DC/DC converters, 3-phase inverters, and on strings with branch connectors. The tests were conducted to determine if nuisance trips would occur in systems using electrically noisy inverters, with series arc faults on parallel strings, and in systems with inverters performing anti-islanding and maximum power point tracking (MPPT) algorithms. The tests reported herein used the arc fault detection device to indicate when the trip signal was sent to the circuit interrupter. Results show significant noise is injected into the array from the inverter but AFCI functionality of the device was generally stable. The relative locations of the arc fault and detector had little influence on arc fault detection. 
Lastly, detection of certain frequency bands successfully differentiated normal operational noise from an arc fault signal.", "title": "" } ]
[ { "docid": "8785f90ae8e4522832c3b9da9165e3e3", "text": "Recent studies have shown that the efficiency of deep neural networks in mobile applications can be significantly improved by distributing the computational workload between the mobile device and the cloud. This paradigm, termed collaborative intelligence, involves communicating feature data between the mobile and the cloud. The efficiency of such an approach can be further improved by lossy compression of feature data, which has not been examined to date. In this work we focus on collaborative object detection and study the impact of both near-lossless and lossy compression of feature data on its accuracy. We also propose a strategy for improving the accuracy under lossy feature compression. Experiments indicate that using this strategy, the communication overhead can be reduced by up to 70% without sacrificing accuracy.", "title": "" }, { "docid": "1e865bd59571b6c1b1012f229efde437", "text": "Do we really need 3D labels in order to learn how to predict 3D? In this paper, we show that one can learn a mapping from appearance to 3D properties without ever seeing a single explicit 3D label. Rather than use explicit supervision, we use the regularity of indoor scenes to learn the mapping in a completely unsupervised manner. We demonstrate this on both a standard 3D scene understanding dataset as well as Internet images for which 3D is unavailable, precluding supervised learning. Despite never seeing a 3D label, our method produces competitive results.", "title": "" }, { "docid": "7f3686b783273c4df7c4fb41fe7ccefd", "text": "Data from the service and manufacturing sectors is increasing sharply, raising growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and the manufacturing sector. 
Current technologies from key aspects of storage technology, data processing technology, data visualization techniques, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion analyzing current movements on Big Data for SCM in services and manufacturing worldwide, including North America, Europe, and the Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper can be referred to by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors.", "title": "" }, { "docid": "f8f1e4f03c6416e9d9500472f5e00dbe", "text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.", "title": "" }, { "docid": "0c7d731d0ba250b0a798fffe5c09c6c2", "text": "There has been much interest in the Sharing Economy in recent years, accompanied by the hope that it will change and specifically make better use of existing resources. 
It intuitively makes sense, from a sustainability point of view, that the sharing of resources is good. It could even be said that the Sharing Economy ought to align well with Computing within Limits and its underlying premises. In this paper however, we take a critical stance and will elaborate on the intersection between the Sharing Economy and Limits (including pinpointing potential conflicts) so as to identify and discuss a 'Limits-compliant Sharing Economy'. We argue that even though there are limits to the Sharing Economy today, it still has potential benefits for a future of scarcity---but only if the practice of sharing is approached with a dual focus on sharing and on limits at the same time. Finally we conclude that even though we have begun to explore the future of sharing, there is still a need to further develop ideas of how the underlying infrastructure for this movement will look.", "title": "" }, { "docid": "64a3fec90138f6786dd8257a5ecd73e4", "text": "Unlabeled high-dimensional text-image web news data are produced every day, presenting new challenges to unsupervised feature selection on multi-view data. State-of-the-art multi-view unsupervised feature selection methods learn pseudo class labels by spectral analysis, which is sensitive to the choice of similarity metric for each view. For text-image data, the raw text itself contains more discriminative information than similarity graph which loses information during construction, and thus the text feature can be directly used for label learning, avoiding information loss as in spectral analysis. We propose a new multi-view unsupervised feature selection method in which image local learning regularized orthogonal nonnegative matrix factorization is used to learn pseudo labels and simultaneously robust joint $l_{2,1}$-norm minimization is performed to select discriminative features. Cross-view consensus on pseudo labels can be obtained as much as possible. 
We systematically evaluate the proposed method in multi-view text-image web news datasets. Our extensive experiments on web news datasets crawled from two major US media channels: CNN and FOXNews demonstrate the efficacy of the new method over state-of-the-art multi-view and single-view unsupervised feature selection methods.", "title": "" }, { "docid": "432fe001ec8f1331a4bd033e9c49ccdf", "text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. 
Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.", "title": "" }, { "docid": "77d73cf3aa583e12cc102f48be184100", "text": "The combinatorial cross-regulation of hundreds of sequence-specific transcription factors (TFs) defines a regulatory network that underlies cellular identity and function. Here we use genome-wide maps of in vivo DNaseI footprints to assemble an extensive core human regulatory network comprising connections among 475 sequence-specific TFs and to analyze the dynamics of these connections across 41 diverse cell and tissue types. We find that human TF networks are highly cell selective and are driven by cohorts of factors that include regulators with previously unrecognized roles in control of cellular identity. Moreover, we identify many widely expressed factors that impact transcriptional regulatory networks in a cell-selective manner. Strikingly, in spite of their inherent diversity, all cell-type regulatory networks independently converge on a common architecture that closely resembles the topology of living neuronal networks. Together, our results provide an extensive description of the circuitry, dynamics, and organizing principles of the human TF regulatory network.", "title": "" }, { "docid": "bc33f06340e652336ef2abb875937d5a", "text": "WORKING PAPERS Examining the Impact of Contextual Ambiguity on Search Advertising Keyword Performance: A Topic Model Approach (with Abhishek, Vibhanshu and Beibei Li), Job Market Paper, invited for resubmission to Marketing Science. Substitution or Promotion? The Impact of Price Discounts on Cross-Channel Sales of Digital Movies (with Michael D. 
Smith, and Rahul Telang), conditionally accepted at the Journal of Retailing.", "title": "" }, { "docid": "15d2651aa06ac8276a8cc48d3399a504", "text": "Recently, the NLP community has shown a renewed interest in lexical semantics in the context of automatic recognition of semantic relationships between pairs of words in text. Lexical semantics has become increasingly important in many natural language applications; this approach to semantics is concerned with psychological facts associated with the meaning of words and how these words can be connected in semantic relations to build ontologies that provide a shared vocabulary to model a specified domain, and represent a structural framework for organizing information across the fields of Artificial Intelligence (AI), the Semantic Web, systems engineering and information architecture. But current systems mainly concentrate on the classification of semantic relations rather than on giving solutions for how these relations can be created [14]. At the same time, systems that do provide methods for creating the relations tend to ignore the context in which the conceptual relationships occur. Furthermore, methods that address semantic (non-taxonomic) relations are yet to come up with widely accepted ways of enhancing the process of classifying and extracting semantic relations. In this research we will focus on the learning of semantic relation patterns between word meanings by taking into consideration the surrounding context in the general domain. We will first generate semantic patterns in a domain-independent environment depending on previous specific semantic information, and a set of input examples. Our case study will be causation relations. 
Then these patterns will classify causation in general domain texts taking into consideration the context of the relations, and then the classified relations will be used to learn new causation semantic patterns.", "title": "" }, { "docid": "4f37b872c44c2bda3ff62e3e8ebf4391", "text": "This paper proposes a method based on conditional random fields to incorporate sentence structure (syntax and semantics) and context information to identify sentiments of sentences within a document. It also proposes and evaluates two different active learning strategies for labeling sentiment data. The experiments with the proposed approach demonstrate a 5-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods.", "title": "" }, { "docid": "df044b996752beb7f0fd067d17c91199", "text": "We introduce lemonUby, a new lexical resource integrated in the Semantic Web which is the result of converting data extracted from the existing large-scale linked lexical resource UBY to the lemon lexicon model. The following data from UBY were converted: WordNet, FrameNet, VerbNet, English and German Wiktionary, the English and German entries of OmegaWiki, as well as links between pairs of these lexicons at the word sense level (links between VerbNet and FrameNet, VerbNet and WordNet, WordNet and FrameNet, WordNet and Wiktionary, WordNet and German OmegaWiki). We linked lemonUby to other lexical resources and linguistic terminology repositories in the Linguistic Linked Open Data cloud and outline possible applications of this new dataset.", "title": "" }, { "docid": "4f686e9f37ec26070d0d280b98f78673", "text": "State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining. ImageNet classification is the de facto pretraining task for these models. Yet, ImageNet is now nearly ten years old and is by modern standards “small”. 
Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger. The reasons are obvious: such datasets are difficult to collect and annotate. In this paper, we present a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images. Our experiments demonstrate that training for large-scale hashtag prediction leads to excellent results. We show improvements on several image classification and object detection tasks, and report the highest ImageNet-1k single-crop, top-1 accuracy to date: 85.4% (97.6% top-5). We also perform extensive experiments that provide novel empirical data on the relationship between large-scale pretraining and transfer learning performance.", "title": "" }, { "docid": "76dcd35124d95bffe47df5decdc5926a", "text": "While kernel drivers have long been known to pose huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers presents some of the hardest challenges to static analysis, and their tight coupling with the hardware makes dynamic analysis infeasible in most cases. In this work, we present DR. CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by only sacrificing soundness in very few cases to ensure that our technique is both scalable and precise. DR. CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and field-sensitive on kernel drivers. 
To demonstrate the scalability and efficacy of DR. CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.", "title": "" }, { "docid": "f9571dc9a91dd8c2c6495814c44c88c0", "text": "Automatic number plate recognition is the task of extracting vehicle registration plates and labeling them with their underlying identity numbers. It uses optical character recognition on images to read the symbols present on the number plates. Generally, a number plate recognition system includes plate localization, segmentation, character extraction and labeling. This research paper describes a machine learning based automated Nepali number plate recognition model. Various image processing algorithms are implemented to detect the number plate and to extract individual characters from it. The recognition system then uses Support Vector Machine (SVM) based learning and prediction on Histograms of Oriented Gradients (HOG) features calculated from each character. The system is evaluated on a self-created Nepali number plate dataset. The evaluation accuracy on the number plate character dataset is obtained as: 6.79% average system error rate, 87.59% average precision, 98.66% average recall and 92.79% average f-score. The accuracy of the complete number plate labeling experiment is obtained as 75.0%. The accuracy of automatic number plate recognition is greatly influenced by the segmentation accuracy of the individual characters along with the size, resolution, pose, and illumination of the given image. Keywords—Nepali License Plate Recognition, Number Plate Detection, Feature Extraction, Histograms of Oriented Gradients, Optical Character Recognition, Support Vector Machines, Computer Vision, Machine Learning", "title": "" }, { "docid": "c7e04eca694526623434c67381194a63", "text": "Synchrony refers to individuals' temporal coordination during social interactions. 
The analysis of this phenomenon is complex, requiring the perception and integration of multimodal communicative signals. The evaluation of synchrony has received multidisciplinary attention because of its role in early development, language learning, and social connection. Originally studied by developmental psychologists, synchrony has now captured the interest of researchers in such fields as social signal processing, robotics, and machine learning. This paper emphasizes the current questions asked by synchrony evaluation and the state-of-the-art related methods. First, we present definitions and functions of synchrony in youth and adulthood. Next, we review the noncomputational and computational approaches of annotating, evaluating, and modeling interactional synchrony. Finally, the current limitations and future research directions in the fields of developmental robotics, social robotics, and clinical studies are discussed.", "title": "" }, { "docid": "45a15455945fdd03ee726b285b8dd75a", "text": "The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N logN) operations rather than O(N2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid [A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368–1383]. 
In this paper, we observe that one of the standard interpolation or “gridding” schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. This is of particular value in two- and three-dimensional settings, saving either 10dN in storage in d dimensions or a factor of about 5–10 in CPU time (independent of dimension).", "title": "" }, { "docid": "32c28df748ea98dffac8bc0fe5aea395", "text": "The stability of an interconnected power system is its ability to return to normal or stable operation after having been subjected to some form of disturbance. Instability means a condition denoting loss of synchronism or falling out of step. Stability considerations have been recognized as an essential part of power system planning for a long time. With interconnected systems continually growing in size and extending over vast geographical regions, it is becoming increasingly more difficult to maintain synchronism between various parts of a power system. FACTS devices have shown very promising results when used to improve power system steady-state performance. They have been very promising candidates for utilization in power system damping enhancement. The Hybrid Power Flow Controller (HPFC) is incorporated with the MM system in the present work, as it can be used to replace or supplement existing equipment. Usually, it can be installed at locations that already have reactive power compensation equipment such as the SVC, STATCOM, etc. In this paper, the authors study power system stability enhancement by implementing the HPFC in the MM power system. The system also provides a comparative study of the performances of the UPFC and HPFC regarding power system stability enhancement. 
Results obtained are encouraging and indicate that the designed model has very good performance which is comparable to the already existing UPFC.", "title": "" }, { "docid": "88fb71e503e0d0af7515dd8489061e25", "text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH are the major building block for Smart Cities and have long been a dream for decades; hobbyists in the late 1970s made Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the results of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks.", "title": "" }, { "docid": "ff4f272d2ddfd41f58679c076b0acf63", "text": "When scoring the quality of JPEG images, the two main considerations for viewers are blocking artifacts and improper luminance changes, such as blur. In this letter, we first propose two measures to estimate the blockiness and the luminance change within individual blocks. Then, a no-reference image quality assessment (NR-IQA) method for JPEG images is proposed. Our method obtains the quality score by considering the blocking artifacts and the luminance changes from all nonoverlapping 8 × 8 blocks in one JPEG image. The proposed method has been tested on five public IQA databases and compared with five state-of-the-art NR-IQA methods for JPEG images. The experimental results show that our method is more consistent with subjective evaluations than the state-of-the-art NR-IQA methods. The MATLAB source code of our method is available at http://image.ustc.edu.cn/IQA.html.", "title": "" } ]
scidocsrr
f1cd6ca7a4182e30b7fc0a88c0815f23
Stating the Obvious: Extracting Visual Common Sense Knowledge
[ { "docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae", "text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information about objects is first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zero-shot affordance prediction and object recognition given human poses.", "title": "" } ]
[ { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which originate from insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only the three most common types of insider, namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source, and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works, and new research gaps and challenges identified.", "title": "" }, { "docid": "204f7f8282954de4d6b725f5cce0b00f", "text": "Traffic classification plays an important and basic role in network management and cyberspace security. With the widespread use of encryption techniques in network applications, encrypted traffic has recently become a great challenge for the traditional traffic classification methods. In this paper we propose an end-to-end encrypted traffic classification method with one-dimensional convolutional neural networks. 
This method integrates feature extraction, feature selection and classification into a unified end-to-end framework, intending to automatically learn the nonlinear relationship between raw input and expected output. To the best of our knowledge, this is the first time an end-to-end method has been applied to the encrypted traffic classification domain. The method is validated with the public ISCX VPN-nonVPN traffic dataset. Among all of the four experiments, with the best traffic representation and the fine-tuned model, 11 of 12 evaluation metrics of the experiment results outperform the state-of-the-art method, which indicates the effectiveness of the proposed method.", "title": "" }, { "docid": "c7d54d4932792f9f1f4e08361716050f", "text": "In this paper, we address several puzzles concerning speech acts, particularly indirect speech acts. We show how a formal semantic theory of discourse interpretation can be used to define speech acts and to avoid murky issues concerning the metaphysics of action. We provide a formally precise definition of indirect speech acts, including the subclass of so-called conventionalized indirect speech acts. This analysis draws heavily on parallels between phenomena at the speech act level and the lexical level. First, we argue that, just as co-predication shows that some words can behave linguistically as if they're `simultaneously' of incompatible semantic types, certain speech acts behave this way too. Secondly, as Horn and Bayer (1984) and others have suggested, both the lexicon and speech acts are subject to a principle of blocking or ``preemption by synonymy'': Conventionalized indirect speech acts can block their `paraphrases' from being interpreted as indirect speech acts, even if this interpretation is calculable from Gricean-style principles. 
We provide a formal model of this blocking, and compare it with existing accounts of lexical blocking.", "title": "" }, { "docid": "8fe95ffa1989c458c9955faad48df195", "text": "While ever more companies use Enterprise Social Networks for knowledge management, there is still a lack of understanding of users’ knowledge exchanging behavior. In this context, it is important to be able to identify and characterize users who contribute and communicate their knowledge in the network and help others to get their work done. In this paper, we propose a new methodological approach consisting of three steps, namely “message classification”, “identification of users’ roles” as well as “characterization of users’ roles”. We apply the approach to a dataset from a multinational consulting company, which allows us to identify three user roles based on their knowledge contribution in messages: givers, takers, and matchers. Going beyond this categorization, our data shows that whereas the majority of messages aims to share knowledge, matchers, that is, people who both give and take, are a central element of the network. In conclusion, the development and application of a new methodological approach allows us to contribute to a more refined understanding of users’ knowledge exchanging behavior in Enterprise Social Networks, which can ultimately help companies to take measures to improve their knowledge management.", "title": "" }, { "docid": "7cf8e2555cfccc1fc091272559ad78d7", "text": "This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distance and angle of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns such as frequency of head nod, hand wave, and body gestures that represent specific emotions. 
Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed using raw feature data from the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements and postures associated with specific emotions. The features from each modality and the behavioral pattern-based features (head shake, arm retraction, body forward movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and a support vector machine (SVM) to predict six basic emotions. The results showed improvement in emotion recognition accuracy (the precision increased by 3.28% and the recall rate by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.", "title": "" }, { "docid": "564872511b110238b1a2d755700fdf12", "text": "The present paper makes use of factorial experiments to assess software complexity using insertion sort as a trivial example. We next propose to implement the methodology in quicksort and other advanced algorithms.", "title": "" }, { "docid": "9b7e83fbcb9c725fbcc42cc082825f4f", "text": "Amazon is well-known for personalization and recommendations, which help customers discover items they might otherwise not have found. In this update to their original paper, the authors discuss some of the changes as Amazon has grown.", "title": "" }, { "docid": "dc812a89cadb88ec6cfc5d75f68052ff", "text": "The recent advancements in sensor technology have made it possible to collect enormous amounts of data in real time. How to find unusual patterns in time series data plays a very important role in data mining. In this paper, we focus on abnormal subsequence detection. 
The original definition of discord subsequences is defective for some kinds of time series; in this paper we give a more robust definition which is based on the k nearest neighbors. We also propose a novel method for time series representation; it has better performance than traditional methods (like PAA/SAX) at representing the characteristics of some special time series. To speed up the process of abnormal subsequence detection, we used a clustering method to optimize the outer loop ordering and to early abandon subsequences which cannot be abnormal. The experimental results validate that the algorithm is correct and has high efficiency.", "title": "" }, { "docid": "ce4a19ccb75c82a0afde6b531776a23f", "text": "This article describes posterior maximization for topic models, identifying computational and conceptual gains from inference under a non-standard parametrization. We then show that fitted parameters can be used as the basis for a novel approach to marginal likelihood estimation, via block-diagonal approximation to the information matrix, that facilitates choosing the number of latent topics. This likelihood-based model selection is complemented with a goodness-of-fit analysis built around estimated residual dispersion. Examples are provided to illustrate model selection as well as to compare our estimation against standard alternative techniques.", "title": "" }, { "docid": "5c4f20fcde1cc7927d359fd2d79c2ba5", "text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. 
A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. 
USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly) is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. 
The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. [Table 1. Categorisation of usability measures reported in [4]: each measure is listed with its measurement category, measurement type, and area measured (UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance, new technologies).] DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. In the context of user centred design, typical usability concerns include: 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user’s experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. 
But some organisations would include both under the common umbrella of user experience. [Table 2. User experience evaluation methods collected at the CHI 2009 SIG, categorised by evaluation context (lab tests, field tests, longitudinal studies, evaluation of groups, domain specific approaches) and by type of evaluation data (user opinion/interview, questionnaire, human responses, expert evaluation).] CONCLUSIONS The scope of user experience The concept of user experience both broadens: • The range of human responses that would be measured to include pleasure. • The circumstances in which they would be measured to include anticipated use and reflection on use. Equally importantly, the goal to achieve improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on use of methods that help understand what can be done to improve this experience through the whole lifecycle of user involvement. 
However, notably absent from any of the current surveys or initiative", "title": "" }, { "docid": "61615f5aefb0aa6de2dd1ab207a966d5", "text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only; therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% over state-of-the-art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. It shows an improvement of 10% in precision over the best performing methods. In order to provide a resource that can be used to find all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 million entities, which required efficient computation of the relatedness scores between the corresponding 17 trillion entity pairs.", "title": "" }, { "docid": "3fa0ab962ec54cea182a293810cf7ce8", "text": "Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. 
And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have. When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new ‘disease’, female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). ‘But,’ the news editor wanted to know, ‘was this paper peer reviewed?’. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)", "title": "" }, { "docid": "fd9db865b26556e99923346a5eb51938", "text": "Optogenetic approaches promise to revolutionize neuroscience by using light to manipulate neural activity in genetically or functionally defined neurons with millisecond precision. Harnessing the full potential of optogenetic tools, however, requires light to be targeted to the right neurons at the right time. Here we discuss some barriers and potential solutions to this problem. We review methods for targeting the expression of light-activatable molecules to specific cell types, under genetic, viral or activity-dependent control. Next we explore new ways to target light to individual neurons to allow their precise activation and inactivation. 
These techniques provide a precision in the temporal and spatial activation of neurons that was not achievable in previous experiments. In combination with simultaneous recording and imaging techniques, these strategies will allow us to mimic the natural activity patterns of neurons in vivo, enabling previously impossible 'dream experiments'.", "title": "" }, { "docid": "35830166ddf17086a61ab07ec41be6b0", "text": "As the need for Human Computer Interaction (HCI) designers increases, so does the need for courses that best prepare students for their future work life. Multidisciplinary teamwork is what graduates very frequently meet in their new work situations. Preparing students for such multidisciplinary work through education is not easy to achieve. In this paper, we investigate ways to engage computer science students, majoring in design, use, and interaction (with technology), in design practices through an advanced graduate course in interaction design. Here, we take a closer look at how the prior embodied and explicit knowledge of HCI that all of the students have, combined with an understanding of design practice through the course, shapes them as human-computer interaction designers. We evaluate the results of the effort in terms of increase in creativity, novelty of ideas, body language when engaged in design activities, and in terms of perceptions of how well this course prepared the students for the work practice outside of the university. Keywords—HCI education; interaction design; studio; design education; multidisciplinary teamwork.", "title": "" }, { "docid": "e9bc802e8ce6a823526084c82aa89c95", "text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward 5G. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in LTE/LTE-Advanced systems. 
Thus, it is of great interest to study how to efficiently and effectively combine NOMA and SU-MIMO techniques together for further system performance improvement. This paper investigates the combination of NOMA with open-loop and closed-loop SU-MIMO. The key issues involved in the combination are presented and discussed, including the scheduling algorithm, successive interference canceller (SIC) order determination, transmission power assignment and feedback design. The performances of NOMA with SU-MIMO are investigated by system-level simulations with very practical assumptions. Simulation results show that, compared to an orthogonal multiple access system, NOMA can achieve large performance gains with both open-loop and closed-loop SU-MIMO, which are about 23% for cell average throughput and 33% for cell-edge user throughput.", "title": "" }, { "docid": "7a3573bfb32dc1e081d43fe9eb35a23b", "text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment indicates that the alignment links between Patty and WordNet have high accuracy, with Mean Reciprocal Rank (MRR) score 0.7 and Normalized Discounted Cumulative Gain (NDCG) score 0.73. 
As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.", "title": "" }, { "docid": "e871e2b5bd1ed95fd5302e71f42208bf", "text": "Chapters 2–7 make up Part II of the book: artificial neural networks. After introducing the basic concepts of neurons and artificial neuron learning rules in Chapter 2, Chapter 3 describes a particular formalism, based on signal-plus-noise, for the learning problem in general. After presenting the basic neural network types, this chapter reviews the principal algorithms for error function minimization/optimization and shows how these learning issues are addressed in various supervised models. Chapter 4 deals with issues in unsupervised learning networks, such as the Hebbian learning rule, principal component learning, and learning vector quantization. Various techniques and learning paradigms are covered in Chapters 3–6, and especially the properties and relative merits of the multilayer perceptron networks, radial basis function networks, self-organizing feature maps and reinforcement learning are discussed in the respective four chapters. Chapter 7 presents an in-depth examination of performance issues in supervised learning, such as accuracy, complexity, convergence, weight initialization, architecture selection, and active learning. Part III (Chapters 8–15) offers an extensive presentation of techniques and issues in evolutionary computing. Besides the introduction to the basic concepts in evolutionary computing, it elaborates on the more important and most frequently used techniques of the evolutionary computing paradigm, such as genetic algorithms, genetic programming, evolutionary programming, evolutionary strategies, differential evolution, cultural evolution, and co-evolution, including design aspects, representation, operators and performance issues of each paradigm. The differences between evolutionary computing and classical optimization are also explained. 
Part IV (Chapters 16 and 17) introduces swarm intelligence. It provides a representative selection of recent literature on swarm intelligence in a coherent and readable form. It illustrates the similarities and differences between swarm optimization and evolutionary computing. Both particle swarm optimization and ant colony optimization are discussed in the two chapters, which serve as a guide to bringing together existing work to enlighten the readers, and to lay a foundation for any further studies. Part V (Chapters 18–21) presents fuzzy systems, with topics ranging from fuzzy sets, fuzzy inference systems, fuzzy controllers, to rough sets. The basic terminology, underlying motivation and key mathematical models used in the field are covered to illustrate how these mathematical tools can be used to handle vagueness and uncertainty. This book is clearly written and it brings together the latest concepts in computational intelligence in a friendly and complete format for undergraduate/postgraduate students as well as professionals new to the field. With about 250 pages covering such a wide variety of topics, it would be impossible to handle everything at great length. Nonetheless, this book is an excellent choice for readers who wish to familiarize themselves with computational intelligence techniques or for an overview/introductory course in the field of computational intelligence. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond—Bernhard Schölkopf and Alexander Smola, (MIT Press, Cambridge, MA, 2002, ISBN 0-262-19475-9). Reviewed by Amir F. Atiya.", "title": "" }, { "docid": "7ec5faf2081790e7baa1832d5f9ab5bd", "text": "Text detection in complex background images is a challenging task for intelligent vehicles. Actually, almost all the widely-used systems focus on commonly used languages, while for some minority languages, such as the Uyghur language, text detection has received less attention. 
In this paper, we propose an effective Uyghur language text detection system in complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set of various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is obviously effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.", "title": "" }, { "docid": "9679713ae8ab7e939afba18223086128", "text": "If, as many psychologists seem to believe, immediate memory represents a distinct system or set of processes from long-term memory (LTM), then what might it be for? This fundamental, functional question was surprisingly unanswerable in the 1970s, given the volume of research that had explored short-term memory (STM), and given the ostensible role that STM was thought to play in cognitive control (Atkinson & Shiffrin, 1971). Indeed, failed attempts to link STM to complex cognitive functions, such as reading comprehension, loomed large in Crowder's (1982) obituary for the concept. Baddeley and Hitch (1974) tried to validate immediate memory's functions by testing subjects in reasoning, comprehension, and list-learning tasks at the same time their memory was occupied by irrelevant material. 
Generally, small memory loads (i.e., three or fewer items) were retained with virtually no effect on the primary tasks, whereas memory loads of six items consistently impaired reasoning, comprehension, and learning. Baddeley and Hitch therefore argued that \"working memory\" (WM)", "title": "" } ]
scidocsrr
61d177c20d922637859089bce8e4bce9
Dotplot Patterns: A Literal Look at Pattern Languages
[ { "docid": "c5fb41286774838ba088415541a84089", "text": "Numerous classes, complex inheritance and containment hierarchies, and diverse patterns of dynamic interaction all contribute to difficulties in understanding, reusing, debugging, and tuning large object-oriented systems. To help overcome these difficulties, we introduce novel views of the behavior of object-oriented systems and an architecture for creating and animating these views. We describe platform-independent techniques for instrumenting object-oriented programs, a language-independent protocol for monitoring their execution, and a structure for decoupling the execution of a subject program from its visualization. Case studies involving tuning and debugging of real systems are presented to demonstrate the benefits of visualization. We believe that visualization will prove to be a valuable tool for object-oriented software development.", "title": "" } ]
[ { "docid": "5b341604b207e80ef444d11a9de82f72", "text": "Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.", "title": "" }, { "docid": "e4bb68996b39f43b45304bb012d52271", "text": "Humanity produces data at exponential rates, creating a growing demand for better storage devices. DNA molecules are an attractive medium to store digital information due to their durability and high information density. Recent studies have made large strides in developing DNA storage schemes by exploiting the advent of massive parallel synthesis of DNA oligos and the high throughput of sequencing platforms. However, most of these experiments reported small gaps and errors in the retrieved information. Here, we report a strategy to store and retrieve DNA information that is robust and approaches the theoretical maximum of information that can be stored per nucleotide. The success of our strategy lies in careful adaption of recent developments in coding theory to the domain specific constrains of DNA storage. To test our strategy, we stored an entire computer operating system, a movie, a gift card, and other computer files with a total of 2.14×10 bytes in DNA oligos. We were able to fully retrieve the information without a single error even with a sequencing throughput on the scale of a single tile of an Illumina sequencing flow cell. 
To further stress our strategy, we created a deep copy of the data by PCR amplifying the oligo pool in a total of nine successive reactions, reflecting one complete path of an exponential process to copy the file 218×10 times. We perfectly retrieved the original data with only five million reads. Taken together, our approach opens the possibility of highly reliable DNA-based storage that approaches the information capacity of DNA molecules and enables virtually unlimited data retrieval. The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC 4.0 International license. doi: http://dx.doi.org/10.1101/074237. bioRxiv preprint first posted online Sep. 9, 2016.", "title": "" }, { "docid": "65baa2316024ca738f566a53818fc626", "text": "The proper usage and creation of transfer functions for time-varying data sets is an often ignored problem in volume visualization. Although methods and guidelines exist for time-invariant data, little formal study for the time-varying case has been performed. This paper examines this problem, and reports the study that we have conducted to determine how the dynamic behavior of time-varying data may be captured by a single or small set of transfer functions. The criteria which dictate when more than one transfer function is needed were also investigated. Four data sets with different temporal characteristics were used for our study. Results obtained using two different classes of methods are discussed, along with lessons learned.
These methods, including a new multiresolution opacity map approach, can be used for semi-automatic generation of transfer functions to explore large-scale time-varying data sets.", "title": "" }, { "docid": "1f484a558cd75b0f4b4cf3fe27559585", "text": "The relationship between theatre and games has been repeatedly discussed (Laurel 1993; Murray 1997; Frasca 2004; El-Nasr 2007; Fernández-Vara 2009), but its possibilities have not been explored in enough depth. This paper goes beyond a theoretical proposal, and describes how Stanislavski's acting method (1959) served as the inspiration to design a game based on the Spanish classical theatre play, La Dama Boba (The Foolish Lady). The result was a point-and-click adventure game developed with the eAdventure platform (Torrente, del Blanco, Marchiori, Moreno-Ger, Fernandez-Manjon 2010), a tool to create educational games. The paper provides an overview of the most and least successful aspects of this design method, and how it helped transform a narrative, dramatic in this case, into a digital game.
Experimental validations on benchmark mammography and histology datasets demonstrate improved retrieval performance over the state-of-the-art methods.", "title": "" }, { "docid": "f5a188c87dd38a0a68612352891bcc3f", "text": "Sentiment analysis of online documents such as news articles, blogs, and microblogs has received increasing attention in recent years. In this article, we propose an efficient algorithm and three pruning strategies to automatically build a word-level emotional dictionary for social emotion detection. In the dictionary, each word is associated with a distribution over a series of human emotions. In addition, a method based on topic modeling is proposed to construct a topic-level dictionary, where each topic is correlated with social emotions. Experiments on real-world data sets have validated the effectiveness and reliability of the methods. Compared with other lexicons, the dictionary generated using our approach is language-independent, fine-grained, and volume-unlimited. The generated dictionary has a wide range of applications, including predicting the emotional distribution of news articles and identifying social emotions on certain entities and news events.", "title": "" }, { "docid": "ed4050c6934a5a26fc377fea3eefa3bc", "text": "This paper presents the design of the permanent magnetic system for a wall climbing robot with permanent magnetic tracks. A wall climbing robot with a permanent magnetic adhesion mechanism for inspecting oil tanks is briefly put forward, including the mechanical system architecture. The permanent magnetic adhesion mechanism and the tracked locomotion mechanism are employed in the robot system. Through static and dynamic force analysis of the robot, design parameters of the adhesion mechanism are derived. Two types of structures for the permanent magnetic units are given in the paper. The analysis of those two types of structure is also detailed.
Finally, two wall climbing robots equipped with those two different magnetic systems are discussed and the experiments are included in the paper.", "title": "" }, { "docid": "92600ef3d90d5289f70b10ccedff7a81", "text": "In this paper, a chicken farm monitoring system is proposed and developed based on a wireless communication unit that transfers data, using a wireless module combined with sensors that detect temperature, humidity, light, and water level values. The system focuses on collecting, storing, and controlling the information of the chicken farm so that meal production of high quality and quantity can be achieved. The system is developed to solve several problems in the chicken farm: the many human workers needed to control the farm, high maintenance costs, and inaccurate data collected at any one point. The proposed methodology helped in finishing this project within the period given. Based on the research that has been carried out, a system that can monitor and control environmental conditions (temperature, humidity, and light) has been developed using the Arduino microcontroller. The system is also able to collect data and operate autonomously.", "title": "" }, { "docid": "d0b803e0ce29b3347c4a17dabe086199", "text": "The portrayal of mentally ill persons in movies and television programs has an important and underestimated influence on public perceptions of their condition and care. Movie stereotypes that contribute to the stigmatization of mentally ill persons include the mental patient as rebellious free spirit, homicidal maniac, seductress, enlightened member of society, narcissistic parasite, and zoo specimen.
The authors suggest that mental health professionals can fight this source of stigma by increasing their collaboration with patient advocacy groups in monitoring negative portrayals of mentally ill people, using public information campaigns such as Mental Illness Awareness Week to call attention to the process of stigmatization, and supporting accurate dramatic and documentary depictions of mental illness.", "title": "" }, { "docid": "20f3b5b42f33056276c44fe4b2f655d2", "text": "We explore unsupervised representation learning of radio communication signals in raw sampled time series representation. We demonstrate that we can learn modulation basis functions using convolutional autoencoders and visually recognize their relationship to the analytic bases used in digital communications. We also propose and evaluate quantitative metrics for quality of encoding using domain relevant performance metrics.", "title": "" }, { "docid": "ca715288ff8af17697e65d8b3c9f01bf", "text": "In the last five years, biologically inspired features (BIF) have consistently held the state-of-the-art results for human age estimation from face images. Recently, researchers have mainly focused on the regression step after feature extraction, such as support vector regression (SVR), partial least squares (PLS), canonical correlation analysis (CCA), and so on. In this paper, we apply a convolutional neural network (CNN) to the age estimation problem, which leads to a fully learned end-to-end system that can estimate age from image pixels directly. Compared with BIF, the proposed method has a deeper structure, and the parameters are learned instead of hand-crafted. The multi-scale analysis strategy is also introduced from traditional methods to the CNN, which improves the performance significantly. Furthermore, we train an efficient network in a multi-task way that can perform age estimation, gender classification, and ethnicity classification well simultaneously.
The experiments on MORPH Album 2 illustrate the superiority of the proposed multi-scale CNN over other state-of-the-art methods.", "title": "" }, { "docid": "38e95632ff481471ddf38c12044257df", "text": "Retrieving object instances among cluttered scenes efficiently requires compact yet comprehensive regional image representations. Intuitively, object semantics can help build the index that focuses on the most relevant regions. However, due to the lack of bounding-box datasets for objects of interest among retrieval benchmarks, most recent work on regional representations has focused on either uniform or class-agnostic region selection. In this paper, we first fill the void by providing a new dataset of landmark bounding boxes, based on the Google Landmarks dataset, which includes 94k images with manually curated boxes from 15k unique landmarks. Then, we demonstrate how a trained landmark detector, using our new dataset, can be leveraged to index image regions and improve retrieval accuracy while being much more efficient than existing regional methods. In addition, we further introduce a novel regional aggregated selective match kernel (R-ASMK) to effectively combine information from detected regions into an improved holistic image representation. R-ASMK boosts image retrieval accuracy substantially at no additional memory cost, while even outperforming systems that index image regions independently. Our complete image retrieval system improves upon the previous state-of-the-art by significant margins on the Revisited Oxford and Paris datasets. Code and data will be released.", "title": "" }, { "docid": "a0501b0b3ba110692f9b162ce5f72c05", "text": "RDF and related Semantic Web technologies have been the recent focus of much research activity. This work has led to new specifications for RDF and OWL. However, efficient implementations of these standards are needed to realize the vision of a world-wide semantic Web.
In particular, implementations that scale to large, enterprise-class data sets are required. Jena2 is the second generation of Jena, a leading semantic web programmers’ toolkit. This paper describes the persistence subsystem of Jena2, which is intended to support large datasets, covering its features, the changes from Jena1, relevant details of the implementation, and performance tuning issues. Query optimization for RDF is identified as a promising area for future research.", "title": "" }, { "docid": "a48b7c679008235568d3d431081277b4", "text": "This paper discusses the security aspects of a registration protocol in a mobile satellite communication system. We propose a new mobile user authentication and data encryption scheme for mobile satellite communication systems. The scheme can remedy a replay attack.", "title": "" }, { "docid": "cf3923db7a4880b586e869be16739c8f", "text": "Deep learning algorithms excel at extracting patterns from raw data, and with large datasets, they have been very successful in computer vision and natural language applications. However, in other domains, large datasets from which to learn representations may not exist. In this work, we develop a novel multimodal CNN-MLP neural network architecture that utilizes both domain-specific feature engineering and learned representations from raw data. We illustrate the effectiveness of such network designs in the chemical sciences, for predicting biodegradability. DeepBioD, a multimodal CNN-MLP network, is more accurate than either standalone network design, and achieves an error classification rate of 0.125 that is 27% lower than the current state-of-the-art.
Thus, our work indicates that combining traditional feature engineering with representation learning can be effective, particularly in situations where labeled data is limited.", "title": "" }, { "docid": "12f8d5a55ba9b1e773fbab5429880db6", "text": "Addiction is associated with neuroplasticity in the corticostriatal brain circuitry that is important for guiding adaptive behaviour. The hierarchy of corticostriatal information processing that normally permits the prefrontal cortex to regulate reinforcement-seeking behaviours is impaired by chronic drug use. A failure of the prefrontal cortex to control drug-seeking behaviours can be linked to an enduring imbalance between synaptic and non-synaptic glutamate, termed glutamate homeostasis. The imbalance in glutamate homeostasis engenders changes in neuroplasticity that impair communication between the prefrontal cortex and the nucleus accumbens. Some of these pathological changes are amenable to new glutamate- and neuroplasticity-based pharmacotherapies for treating addiction.", "title": "" }, { "docid": "e9ea3dd59bb3ab6bd698b44c993a8b0e", "text": "We present an optical flow algorithm for large displacement motions. Most existing optical flow methods use the standard coarse-to-fine framework to deal with large displacement motions which has intrinsic limitations. Instead, we formulate the motion estimation problem as a motion segmentation problem. We use approximate nearest neighbor fields to compute an initial motion field and use a robust algorithm to compute a set of similarity transformations as the motion candidates for segmentation. To account for deviations from similarity transformations, we add local deformations in the segmentation process. We also observe that small objects can be better recovered using translations as the motion candidates. We fuse the motion results obtained under similarity transformations and under translations together before a final refinement. 
Experimental validation shows that our method can successfully handle large displacement motions. Although we particularly focus on large displacement motions in this work, we make no sacrifice in terms of overall performance. In particular, our method ranks at the top of the Middlebury benchmark.", "title": "" }, { "docid": "a53065d1cfb1fe898182d540d65d394b", "text": "This paper presents a novel approach for detecting affine invariant interest points. Our method can deal with significant affine transformations, including large scale changes. Such transformations introduce significant changes in the point location as well as in the scale and the shape of the neighbourhood of an interest point. Our approach allows us to solve these problems simultaneously. It is based on three key ideas: 1) The second moment matrix computed at a point can be used to normalize a region in an affine invariant way (skew and stretch). 2) The scale of the local structure is indicated by local extrema of normalized derivatives over scale. 3) An affine-adapted Harris detector determines the location of interest points. A multi-scale version of this detector is used for initialization. An iterative algorithm then modifies the location, scale, and neighbourhood of each point and converges to affine invariant points. For matching and recognition, the image is characterized by a set of affine invariant points; the affine transformation associated with each point allows the computation of an affine invariant descriptor which is also invariant to affine illumination changes. A quantitative comparison of our detector with existing ones shows a significant improvement in the presence of large affine deformations. Experimental results for wide baseline matching show an excellent performance in the presence of large perspective transformations, including significant scale changes. Results for recognition are very good for a database with more than 5000 images.
scidocsrr
59e961dd5a4db454129f31cd2e85e782
Probabilistic risk analysis and terrorism risk.
[ { "docid": "7adb0a3079fb3b64f7a503bd8eae623e", "text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.", "title": "" } ]
[ { "docid": "a2189a6b0cf23e40e2d1948e86330466", "text": "Evolutionary psychology is an approach to the psychological sciences in which principles and results drawn from evolutionary biology, cognitive science, anthropology, and neuroscience are integrated with the rest of psychology in order to map human nature. By human nature, evolutionary psychologists mean the evolved, reliably developing, species-typical computational and neural architecture of the human mind and brain. According to this view, the functional components that comprise this architecture were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors, and to regulate behavior so that these adaptive problems were successfully addressed (for discussion, see Cosmides & Tooby, 1987, Tooby & Cosmides, 1992). Evolutionary psychology is not a specific subfield of psychology, such as the study of vision, reasoning, or social behavior. It is a way of thinking about psychology that can be applied to any topic within it including the emotions.", "title": "" }, { "docid": "f555a50f629bd9868e1be92ebdcbc154", "text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. 
Then, we outline the current state of the research and future perspectives. With this article, readers can gain a more thorough understanding of smart grid security and the research trends in this topic.", "title": "" }, { "docid": "60fbaecc398f04bdb428ccec061a15a5", "text": "A decade earlier, work on modeling and analyzing social networks was primarily focused on manually collected datasets, where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relations). With the popularity of online social networks, the notion of “friendship” changed dramatically. Data collection, although now automated, contains dense friendship links, but the links carry noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how their identification plays a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization, and so on. The binary metric used so far for modeling links in social networks (i.e. friends or not) is of little importance, as it groups all our relatives, close friends, and acquaintances in the same category. Therefore, the popular notion of tie strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie strength for each link in the network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.
The authors are grateful to the interviewees, whose willingness to share their valuable insights and experiences made this study possible, and to the senior editors and reviewers for their very helpful feedback and advice throughout the review process. 3 All quotes in this article are from employees of “RetailCo,” the subject of this case study. The names of the organization and its business divisions have been anonymized. 4 A digital business platform is “an integrated set of electronic business processes and the technologies, applications and data supporting those processes” Weill, P. and Ross, J. W. IT Savvy: What Top Executives Must Know to Go from Pain to Gain, Harvard Business School Publishing, 2009, p. 4; for more on digitized platforms, see pp. 67-87 of this publication. How an Australian Retailer Enabled Business Transformation Through Enterprise Architecture", "title": "" }, { "docid": "de17b1fcae6336947e82adab0881b5ba", "text": "Presence of duplicate documents in the World Wide Web adversely affects crawling, indexing and relevance, which are the core building blocks of web search. In this paper, we present a set of techniques to mine rules from URLs and utilize these learnt rules for de-duplication using just URL strings without fetching the content explicitly. Our technique is composed of mining the crawl logs and utilizing clusters of similar pages to extract specific rules from URLs belonging to each cluster. Preserving each mined rules for de-duplication is not efficient due to the large number of specific rules. We present a machine learning technique to generalize the set of rules, which reduces the resource footprint to be usable at web-scale. The rule extraction techniques are robust against web-site specific URL conventions. 
We demonstrate the effectiveness of our techniques through experimental evaluation.", "title": "" }, { "docid": "171d9acd0e2cb86a02d5ff56d4515f0d", "text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrase-based translation in nearly all settings.", "title": "" }, { "docid": "2d6523ef6609c11274449d3b9a57c53c", "text": "Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. Through jointly applying cryptographic techniques, such as order-preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance.
This work has promising applications in secure multimedia management.", "title": "" }, { "docid": "3caa8fc1ea07fcf8442705c3b0f775c5", "text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. 
We discuss the findings and conclude with implications for predictive analytics with big social data.", "title": "" }, { "docid": "52b1c306355e6bf8ba10ea7e3cf1d05e", "text": "QUESTION\nIs there a means of assessing research impact beyond citation analysis?\n\n\nSETTING\nThe case study took place at the Washington University School of Medicine Becker Medical Library.\n\n\nMETHOD\nThis case study analyzed the research study process to identify indicators beyond citation count that demonstrate research impact.\n\n\nMAIN RESULTS\nThe authors discovered a number of indicators that can be documented for assessment of research impact, as well as resources to locate evidence of impact. As a result of the project, the authors developed a model for assessment of research impact, the Becker Medical Library Model for Assessment of Research.\n\n\nCONCLUSION\nAssessment of research impact using traditional citation analysis alone is not a sufficient tool for assessing the impact of research findings, and it is not predictive of subsequent clinical applications resulting in meaningful health outcomes. The Becker Model can be used by both researchers and librarians to document research impact to supplement citation analysis.", "title": "" }, { "docid": "5e85b2fedd9fc66b198ccfc5b010da54", "text": "Keywords: Theory of planned behaviour; Post-adoption; Perceived value; Facebook; Social networking sites; TPB; SNS. This study examines the continuance participation intentions and behaviour on Facebook, as a representative of Social Networking Sites (SNSs), from a social and behavioural perspective. The study extends the Theory of Planned Behaviour (TPB) through the inclusion of a perceived value construct and utilizes the extended theory to explain users' continuance participation intentions and behaviour on Facebook.
Despite the recent massive uptake of Facebook, our review of the related literature revealed that very few studies have tackled such technologies from the context of post-adoption, as in this research. Using data from surveys of undergraduate and postgraduate students in Jordan (n=403), the extended theory was tested using statistical analysis methods. The results show that attitude, subjective norm, perceived behavioural control, and perceived value have a significant effect on the continuance participation intention of post-adopters. Further, the results show that continuance participation intention and perceived value have a significant effect on continuance participation behaviour. However, the results show that perceived behavioural control has no significant effect on the continuance participation behaviour of post-adopters. When comparing the extended theory developed in this study with the standard TPB, it was found that the inclusion of the perceived value construct in the extended theory is fruitful; such an extension explained an additional 11.6% of the variance in continuance participation intention and 4.5% of the variance in continuance participation behaviour over the standard TPB constructs. Consistent with the research on value-driven post-adoption behaviour, these findings suggest that continuance intentions and behaviour of Facebook users are likely to be greater when they perceive the behaviour to be associated with significant added value (i.e. benefits outweigh sacrifices). Since its introduction, the Internet has enabled entirely new forms of social interaction and activities, thanks to its basic features such as prevalent usability and access. As the Internet has massively evolved over time, the World Wide Web, otherwise referred to as Web 1.0, has been transformed into the so-called Web 2.0.
In fact, Web 2.0 refers to the second generation of the World Wide Web that facilitates information sharing, interoperability, user-centred design and collaboration. The advent of Web 2.0 has led to the development and evolution of Web-based communities, hosted services, and Web applications that work as a mainstream medium for value creation and exchange. Examples of Web …", "title": "" }, { "docid": "11a9d7a218d1293878522252e1f62778", "text": "This paper presents a wideband circularly polarized millimeter-wave (mmw) antenna design. We introduce a novel 3-D-printed polarizer, which consists of several air and dielectric slabs to transform the polarization of the antenna radiation from linear to circular. The proposed polarizer is placed above a radiating aperture operating at the center frequency of 60 GHz. An electric field, <inline-formula> <tex-math notation=\"LaTeX\">${E}$ </tex-math></inline-formula>, radiated from the aperture generates two components of electric fields, <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula>. After passing through the polarizer, both <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> fields can be degenerated with an orthogonal phase difference which results in having a wide axial ratio bandwidth. 
The phase difference between <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\mathrm {y}}$ </tex-math></inline-formula> is determined by the incident angle <inline-formula> <tex-math notation=\"LaTeX\">$\phi $ </tex-math></inline-formula> of the polarization of the electric field to the polarizer, as well as the thickness, <inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula>, of the dielectric slabs. With the help of the thickness of the polarizer, the directivity of the radiation pattern is increased so as to devote high-gain and wideband characteristics to the antenna. To verify our concept, an intensive parametric study and an experiment were carried out. Three antenna sources, including dipole, patch, and aperture antennas, were investigated with the proposed 3-D-printed polarizer. All measured results agree with the theoretical analysis. The proposed antenna with the polarizer achieves a wide impedance bandwidth of 50% from 45 to 75 GHz for a reflection coefficient less than or equal to −10 dB, and yields an overlapped axial ratio bandwidth of 30% from 49 to 67 GHz for an axial ratio ≤ 3 dB. The maximum gain of the antenna reaches 15 dBic. The proposed methodology of this design can be applied to applications related to mmw wireless communication systems. The ultimate goal of this paper is to develop a wideband, high-gain, and low-cost antenna for the mmw frequency band.", "title": "" }, { "docid": "289b67247b109ee0de851c0cd4e76ec3", "text": "User engagement is a key concept in designing user-centred web applications. It refers to the quality of the user experience that emphasises the positive aspects of the interaction, and in particular the phenomena associated with being captivated by technology. This definition is motivated by the observation that successful technologies are not just used, but they are engaged with. 
Numerous methods have been proposed in the literature to measure engagement, however, little has been done to validate and relate these measures and so provide a firm basis for assessing the quality of the user experience. Engagement is heavily influenced, for example, by the user interface and its associated process flow, the user’s context, value system and incentives. In this paper we propose an approach to relating and developing unified measures of user engagement. Our ultimate aim is to define a framework in which user engagement can be studied, measured, and explained, leading to recommendations and guidelines for user interface and interaction design for front-end web technology. Towards this aim, in this paper, we consider how existing user engagement metrics, web analytics, information retrieval metrics, and measures from immersion in gaming can bring new perspective to defining, measuring and explaining user engagement.", "title": "" }, { "docid": "00602badbfba6bc97dffbdd6c5a2ae2d", "text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. 
To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.", "title": "" }, { "docid": "d19e825235b5fbb759ff49a1c8398cea", "text": "Febrile seizures are common and mostly benign. They are the most common cause of seizures in children less than five years of age. There are two categories of febrile seizures, simple and complex. Both the International League Against Epilepsy and the National Institutes of Health have published definitions on the classification of febrile seizures. Simple febrile seizures are mostly benign, but a prolonged (complex) febrile seizure can have long-term consequences. Most children who have a febrile seizure have normal health and development after the event, but there is recent evidence that suggests a small subset of children who present with seizures and fever may have recurrent seizures or develop epilepsy. This review will give an overview of the definition of febrile seizures, epidemiology, evaluation, treatment, outcomes and recent research.", "title": "" }, { "docid": "bb799a3aac27f4ac764649e1f58ee9fb", "text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. 
Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.", "title": "" }, { "docid": "1255c63b8fc0406b1f3a0161f59ebfb1", "text": "This paper proposes an EMI filter design software which can serve as an aid to the designer to quickly arrive at optimal filter sizes based on off-line measurement data or simulation results. The software covers different operating conditions-such as: different switching devices, different types of switching techniques, different load conditions and layout of the test setup. The proposed software design works for both silicon based and WBG based power converters.", "title": "" }, { "docid": "0c41de0df5dd88c87061c57ae26c5b32", "text": "Context. The share and importance of software within automotive vehicles is growing steadily. 
Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. 
Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.", "title": "" }, { "docid": "5bb98a6655f823b38c3866e6d95471e9", "text": "This article describes the HR Management System in place at Sears. Key emphases of Sears' HR management infrastructure include: (1) formulating and communicating a corporate mission, vision, and goals, (2) employee education and development through the Sears University, (3) performance management and incentive compensation systems linked closely to the firm's strategy, (4) validated employee selection systems, and (5) delivering the \"HR Basics\" very competently. Key challenges for the future include: (1) maintaining momentum in the performance improvement process, (2) identifying barriers to success, and (3) clearly articulating HR's role in the change management process. © 1999 John Wiley & Sons, Inc.", "title": "" }, { "docid": "f14b2dda47ff1eed966a3dad44514334", "text": "Diced cartilage rolled up in a fascia (DC-F) is a recent technique developed by Rollin K Daniel. It consists of tailor-making a composite graft composed of pieces of cartilage cut into small dice and wrapped in a layer of deep temporal aponeurosis. This initially malleable graft allows an effective dorsum augmentation (1 to 10 mm), adjustable until the end of the operation and even postoperatively. The indications are all the primary and secondary augmentation rhinoplasties. 
However, the elective indications are the secondary augmentation rhinoplasties with cartilaginous donor site depletion, or when cartilaginous grafts are of poor quality (insufficient length, multifragmented...), or finally when the recipient site is uneven or asymmetrical. We report our experience of 20 patients operated on in 2006 and 2007, with a minimum one-year follow-up. All the cases are relative or absolute saddle noses, idiopathic, post-traumatic or iatrogenic. Moreover, two patients also had a concomitant chin augmentation with DC-F. No case of displacement or resorption was noted. We modified certain technical points in order to make this technique even more powerful and predictable.", "title": "" }, { "docid": "fb9bbd096fa29cbb0abf646b33f7693b", "text": "This paper presents a new parameter extraction methodology, based on an accurate and continuous MOS model dedicated to low-voltage and low-current analog circuit design and simulation (EKV MOST Model). The extraction procedure provides the key parameters from the pinch-off versus gate voltage characteristic, measured at constant current from a device biased in moderate inversion. Unique parameter sets, suitable for statistical analysis, describe the device behavior in all operating regions and over all device geometries. This efficient and simple method is shown to be accurate for both submicron bulk CMOS and fully depleted SOI technologies. INTRODUCTION The requirements for good MOS analog simulation models such as accuracy and continuity of the large- and small-signal characteristics are well established [1][2]. Continuity of the large- and small-signal characteristics from weak to strong inversion is one of the main features of the Enz-Krummenacher-Vittoz or EKV MOS transistor model [3][4][5]. One of the basic concepts of this model is the pinch-off voltage. A constant current bias is used to measure the pinch-off voltage versus gate voltage characteristic in moderate inversion (MI). 
This measure allows for an efficient and simple characterization method to be formulated for the most important model parameters, such as the threshold voltage and the other parameters related to the channel doping, using a single measured characteristic. The same principle is applied for various geometries, including short- and narrow-channel devices, and forms the major part of the complete characterization methodology. The simplicity of the model and the relatively small number of parameters to be extracted eases the parameter extraction. This is of particular importance if large statistical data are to be gathered. This method has been validated on a large number of different CMOS processes. To show its flexibility as well as the abilities of the model, results are presented for submicron bulk and fully depleted SOI technologies. SHORT DESCRIPTION OF THE STATIC MODEL A detailed description of the model formulation can be found in [3]; important concepts are briefly recalled here since they form the basis of the parameter extraction. A set of 13 intrinsic parameters is used for first- and second-order effects, listed in Table I. Unlike most other MOS simulation models, in the EKV model the gate, source and drain voltages, VG, VS and VD, are all referred to the substrate in order to preserve the intrinsic symmetry of the device. The Pinch-off Voltage The threshold voltage VTO, which is consequently also referred to the bulk, is defined as the gate voltage for which the inversion charge forming the channel is zero at equilibrium. The pinch-off voltage VP corresponds to the value of the channel potential Vch for which the inversion charge becomes zero in a non-equilibrium situation. VP can be directly related to VG :", "title": "" } ]
scidocsrr
ca48c9c0014753549bd29a61a5924f01
Design of a High-Performance System for Secure Image Communication in the Internet of Things
[ { "docid": "adc9e237e2ca2467a85f54011b688378", "text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.", "title": "" } ]
[ { "docid": "339b405d32b9afb4a36f2a8f9bba485d", "text": "Inspired by the recent advances in generative models, we introduce a human action generation model in order to generate a consecutive sequence of human motions to formulate novel actions. We propose a framework of an autoencoder and a generative adversarial network (GAN) to produce multiple and consecutive human actions conditioned on the initial state and the given class label. The proposed model is trained in an end-to-end fashion, where the autoencoder is jointly trained with the GAN. The model is trained on the NTU RGB+D dataset and we show that the proposed model can generate different styles of actions. Moreover, the model can successfully generate a sequence of novel actions given different action labels as conditions. The conventional human action prediction and generation models lack those features, which are essential for practical applications.", "title": "" }, { "docid": "c64dd1051c5b6892df08813e38285843", "text": "Diabetes has emerged as a major healthcare problem in India. Today, approximately 8.3% of the global adult population suffers from diabetes. India has one of the largest diabetic populations in the world. The technologies currently available on the market are invasive. Invasive methods cause pain, are time-consuming and expensive, and carry a potential risk of spreading infectious diseases such as hepatitis and HIV, so continuous monitoring is therefore not possible. Nowadays there is a tremendous increase in the use of electrical and electronic equipment in the medical field for clinical and research purposes. Thus, biomedical equipment has a greater role in solving medical problems and enhancing quality of life. Hence there is a great demand for a reliable, instantaneous, cost-effective and comfortable measurement system for the detection of blood glucose concentration. 
A non-invasive blood glucose measurement device is one such system, which can be used for continuous monitoring of glucose levels in the human body.", "title": "" }, { "docid": "317f1a01a8df4becdb3611c63cef618f", "text": "High-brightness white LEDs have attracted a lot of attention for their high efficacy, simple driving, environmental friendliness, long lifespan and small size. The power supply for LED lighting also requires long life while maintaining high efficiency, high power factor and low cost. However, a typical design employs an electrolytic capacitor as the storage capacitor, which is not only bulky but also short-lived, thus hampering the entire LED lighting system. To prolong the lifespan of the power supply, a film capacitor with small capacitance has to replace the electrolytic capacitor. In this paper, a universal-input, high-efficiency, high-power-factor LED driver is proposed based on the modified SEPIC converter. Along with a relatively large voltage ripple allowable in a PFC design, the proposed LED lamp driver is able to eliminate the electrolytic capacitor while maintaining high power factor. To increase the efficiency of the LED driver, the presented SEPIC-derived converter is modified further as the twin-bus output stage for matching the ultra-high-efficiency twin-bus LED current regulator. The operation principle and related analysis are described in detail. A 50-W prototype has been built and tested to verify the proposed LED driver.", "title": "" }, { "docid": "5a1cdadf05fc4c5ae6f7fa3142e7ed16", "text": "One major obstacle towards AI is the poor ability of models to solve new problems more quickly, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. 
These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art.", "title": "" }, { "docid": "00bd0665891eb9cd9c865074dcf89e9a", "text": "This case report presents the treatment of a patient with skeletal Cl II malocclusion and anterior open-bite who was treated with zygomatic miniplates through the intrusion of maxillary posterior teeth. A 16-year-old female patient with a chief complaint of anterior open-bite had a symmetric face, incompetent lips, convex profile, retrusive lower lip and chin. Intraoral examination showed that the buccal segments were in Class II relationship, and there was anterior open-bite (overbite -6.5 mm). The cephalometric analysis showed Class II skeletal relationship with increased lower facial height. The treatment plan included intrusion of the maxillary posterior teeth using zygomatic miniplates followed by fixed orthodontic treatment. At the end of treatment Class I canine and molar relationships were achieved, anterior open-bite was corrected and normal smile line was obtained. Skeletal anchorage using zygomatic miniplates is an effective method for open-bite treatment through the intrusion of maxillary posterior teeth.", "title": "" }, { "docid": "cf32fb173182e8bd64150019f9fa36bb", "text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Identify and describe the anatomy of and changes to the aging face, including changes in bone mass and structure and changes to the skin, tissue, and muscles. 2. 
Assess each individual's unique anatomy before embarking on face-lift surgery and incorporate various surgical techniques, including fat grafting and other corrective procedures in addition to shifting existing fat to a higher position on the face, into discussions with patients. 3. Identify risk factors and potential complications in prospective patients. 4. Describe the benefits and risks of various techniques.\n\n\nSUMMARY\nThe ability to surgically rejuvenate the aging face has progressed in parallel with plastic surgeons' understanding of facial anatomy. In turn, a more clear explanation now exists for the visible changes seen in the aging face. This article and its associated video content review the current understanding of facial anatomy as it relates to facial aging. The standard face-lift techniques are explained and their various features, both good and bad, are reviewed. The objective is for surgeons to make a better aesthetic diagnosis before embarking on face-lift surgery, and to have the ability to use the appropriate technique depending on the clinical situation.", "title": "" }, { "docid": "543099ac1bb00e14f4fc757a25d9487c", "text": "With the development of personalized services, collaborative filtering techniques have been successfully applied to the network recommendation system. But sparse data seriously affect the performance of collaborative filtering algorithms. To alleviate the impact of data sparseness, using user interest information, an improved user-based clustering Collaborative Filtering (CF) algorithm is proposed in this paper, which improves the algorithm by two ways: user similarity calculating method and user-item rating matrix extended. The experimental results show that the algorithm could describe the user similarity more accurately and alleviate the impact of data sparseness in collaborative filtering algorithm. 
Also the results show that it can improve the accuracy of the collaborative recommendation algorithm.", "title": "" }, { "docid": "66878197b06f3fac98f867d5457acafe", "text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. 
These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.", "title": "" }, { "docid": "17ceaef57bfa8bf97a75f4f341c58783", "text": "Slip is the major cause of falls in human locomotion. We present a new bipedal modeling approach to capture and predict human walking locomotion with slips. Compared with the existing bipedal models, the proposed slip walking model includes the human foot rolling effects, the existence of the double-stance gait and active ankle joints. One of the major developments is the relaxation of the nonslip assumption that is used in the existing bipedal models. We conduct extensive experiments to optimize the gait profile parameters and to validate the proposed walking model with slips. The experimental results demonstrate that the model successfully predicts the human recovery gaits with slips.", "title": "" }, { "docid": "a2842352924cbd1deff52976425a0bd6", "text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. 
Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.", "title": "" }, { "docid": "c949e051cbfd9cff13d939a7b594e6e6", "text": "Propagation measurements at 28 GHz were conducted in outdoor urban environments in New York City using four different transmitter locations and 83 receiver locations with distances of up to 500 m. A 400 mega- chip per second channel sounder with steerable 24.5 dBi horn antennas at the transmitter and receiver was used to measure the angular distributions of received multipath power over a wide range of propagation distances and urban settings. Measurements were also made to study the small-scale fading of closely-spaced power delay profiles recorded at half-wavelength (5.35 mm) increments along a small-scale linear track (10 wavelengths, or 107 mm) at two different receiver locations. Our measurements indicate that power levels for small- scale fading do not significantly fluctuate from the mean power level at a fixed angle of arrival. We propose here a new lobe modeling technique that can be used to create a statistical channel model for lobe path loss and shadow fading, and we provide many model statistics as a function of transmitter- receiver separation distance. Our work shows that New York City is a multipath-rich environment when using highly directional steerable horn antennas, and that an average of 2.5 signal lobes exists at any receiver location, where each lobe has an average total angle spread of 40.3° and an RMS angle spread of 7.8°. This work aims to create a 28 GHz statistical spatial channel model for future 5G cellular networks.", "title": "" }, { "docid": "799043a0617a8a9e5aa22fdb1501084d", "text": "Test case prioritization is a crucial element in software quality assurance in practice, specially, in the context of regression testing. 
Typically, test cases are prioritized so that they detect potential faults earlier. The effectiveness of test cases, in terms of fault detection, is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases; therefore, they are ranked highly when prioritizing. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar, e.g., when the new failing test is a slightly modified version of an old failing one to catch an undetected fault. In this paper, we define a class of metrics that estimate test case quality using their similarity to the previously failing test cases. We have conducted several experiments with five real-world open-source software systems, with real faults, to evaluate the effectiveness of these quality metrics. The results of our study show that our proposed similarity-based quality measure is significantly more effective for prioritizing test cases compared to existing test case quality measures.", "title": "" }, { "docid": "58920ab34e358c13612d793bb3127c9f", "text": "We revisit the problem of interval estimation of a binomial proportion. The erratic behavior of the coverage probability of the standard Wald confidence interval has previously been remarked on in the literature (Blyth and Still, Agresti and Coull, Santner and others). We begin by showing that the chaotic coverage properties of the Wald interval are far more persistent than is appreciated. Furthermore, common textbook prescriptions regarding its safety are misleading and defective in several respects and cannot be trusted. This leads us to consideration of alternative intervals. A number of natural alternatives are presented, each with its motivation and context. Each interval is examined for its coverage probability and its length. 
Based on this analysis, we recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n. We also provide an additional frequentist justification for use of the Jeffreys interval.", "title": "" }, { "docid": "81ef390009fb64bf235147bc0e186bab", "text": "In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible to walk through and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principal point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R_o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R_o and the camera coordinate system R_c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit as well as possible with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.", "title": "" }, { "docid": "611eacd767f1ea709c1c4aca7acdfcdb", "text": "This paper presents a bi-directional converter applied in an electric bike.
The main structure is a cascade buck-boost converter, which transfers the energy stored in the battery to drive the motor, and can recycle the energy resulting from the back electromotive force (BEMF) to charge the battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting to the AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.", "title": "" }, { "docid": "cf5f3db56feb7d46c4806be434f6a665", "text": "Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, technical knowledge comparable to that of those who create and distribute this propaganda is necessary to investigate the phenomenon. However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists.
Big data research is necessary to understand the sociotechnical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates. Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means. Propaganda has a long history. Scholars who study propaganda as an offline or historical phenomenon have long been split over whether the existence of propaganda is necessarily detrimental to the functioning of democracies. However, the rise of the Internet and, in particular, social media has profoundly changed the landscape of propaganda. It has opened the creation and dissemination of propaganda messages, which were once the province of states and large institutions, to a wide variety of individuals and groups. It has allowed cross-border computational propaganda and interference in domestic political processes by foreign states.
The anonymity of the Internet has allowed state-produced propaganda to be presented as if it were not produced by state actors. The Internet has also provided new affordances for the efficient dissemination of propaganda, through the manipulation of the algorithms and processes that govern online information and through audience targeting based on big data analytics. The social effects of the changing nature of propaganda are only just beginning to be understood, and the advancement of this understanding is complicated by the unprecedented marrying of the social and the technical that the Internet age has enabled. The articles in this special issue showcase the state of the art in the use of big data in the study of computational propaganda and the influence of social media on politics. This rapidly emerging field represents a new clash of the highly social and highly technical in both", "title": "" }, { "docid": "99a4fc6540802ff820fef9ca312cdc1c", "text": "Problem diagnosis is one crucial aspect in the cloud operation that is becoming increasingly challenging. On the one hand, the volume of logs generated in today's cloud is overwhelmingly large. On the other hand, cloud architecture becomes more distributed and complex, which makes it more difficult to troubleshoot failures. In order to address these challenges, we have developed a tool, called LOGAN, that enables operators to quickly identify the log entries that potentially lead to the root cause of a problem. It constructs behavioral reference models from logs that represent the normal patterns. When a problem occurs, our tool enables operators to inspect the divergence of current logs from the reference model and highlight logs likely to contain the hints to the root cause. To support these capabilities we have designed and developed several mechanisms. First, we developed log correlation algorithms using various IDs embedded in logs to help identify and isolate log entries that belong to the failed request.
Second, we provide efficient log comparison to help understand the differences between different executions. Finally, we designed mechanisms to highlight critical log entries that are likely to contain information pertaining to the root cause of the problem. We have implemented the proposed approach in a popular cloud management system, OpenStack, and through case studies, we demonstrate that this tool can help operators perform problem diagnosis quickly and effectively.", "title": "" }, { "docid": "6aab23ee181e8db06cc4ca3f7f7367be", "text": "In their original article, Ericsson, Krampe, and Tesch-Römer (1993) reviewed the evidence concerning the conditions of optimal learning and found that individualized practice with training tasks (selected by a supervising teacher) with a clear performance goal and immediate informative feedback was associated with marked improvement. We found that this type of deliberate practice was prevalent when advanced musicians practice alone and found its accumulated duration related to attained music performance. In contrast, Macnamara, Moreau, and Hambrick's (2016, this issue) main meta-analysis examines the use of the term deliberate practice to refer to a much broader and less defined concept including virtually any type of sport-specific activity, such as group activities, watching games on television, and even play and competitions. Summing up every hour of any type of practice during an individual's career implies that the impact of all types of practice activity on performance is equal—an assumption that I show is inconsistent with the evidence.
Future research should collect objective measures of representative performance with a longitudinal description of all the changes in different aspects of the performance so that any proximal conditions of deliberate practice related to effective improvements can be identified and analyzed experimentally.", "title": "" }, { "docid": "29734bed659764e167beac93c81ce0a7", "text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.", "title": "" }, { "docid": "2657bb2a6b2fb59714417aa9e6c6c5eb", "text": "Mash extends the MinHash dimensionality-reduction technique to include a pairwise mutation distance and P value significance test, enabling the efficient clustering and search of massive sequence collections. Mash reduces large sequences and sequence sets to small, representative sketches, from which global mutation distances can be rapidly estimated. We demonstrate several use cases, including the clustering of all 54,118 NCBI RefSeq genomes in 33 CPU h; real-time database search using assembled or unassembled Illumina, Pacific Biosciences, and Oxford Nanopore data; and the scalable clustering of hundreds of metagenomic samples by composition. Mash is freely released under a BSD license ( https://github.com/marbl/mash ).", "title": "" } ]
scidocsrr
f6d8b57317b9b054453e22c65e37e879
5G cellular: key enabling technologies and research challenges
[ { "docid": "f84c399ff746a8721640e115fd20745e", "text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.", "title": "" } ]
[ { "docid": "bfd19a8b2c11c9c3083b358f72314fc5", "text": "Changes in temperature, precipitation, and other climatic drivers and sea-level rise will affect populations of existing native and non-native aquatic species and the vulnerability of aquatic environments to new invasions. Monitoring surveys provide the foundation for assessing the combined effects of climate change and invasions by providing baseline biotic and environmental conditions, although the utility of a survey depends on whether the results are quantitative or qualitative, and other design considerations. The results from a variety of monitoring programs in the United States are available in integrated biological information systems, although many include only non-native species, not native species. Besides including natives, we suggest these systems could be improved through the development of standardized methods that capture habitat and physiological requirements and link regional and national biological databases into distributed Web portals that allow drawing information from multiple sources. Combining the outputs from these biological information systems with environmental data would allow the development of ecological-niche models that predict the potential distribution or abundance of native and non-native species on the basis of current environmental conditions. Environmental projections from climate models can be used in these niche models to project changes in species distributions or abundances under altered climatic conditions and to identify potential high-risk invaders. There are, however, a number of challenges, such as uncertainties associated with projections from climate and niche models and difficulty in integrating data with different temporal and spatial granularity. 
Even with these uncertainties, integration of biological and environmental information systems, niche models, and climate projections would improve management of aquatic ecosystems under the dual threats of biotic invasions and climate change.", "title": "" }, { "docid": "f20c08bd1194f8589d6e56e66951a7f8", "text": "The computational complexity grows exponentially for multi-level thresholding (MT) with the increase of the number of thresholds. Taking Kapur’s entropy as the optimized objective function, the paper puts forward the modified quick artificial bee colony algorithm (MQABC), which employs a new distance strategy for neighborhood searches. The experimental results show that MQABC can search out the optimal thresholds efficiently, precisely, and speedily, and the thresholds are very close to the results examined by exhaustive searches. In comparison to the EMO (Electro-Magnetism optimization), which is based on Kapur’s entropy, the classical ABC algorithm, and MDGWO (modified discrete grey wolf optimizer) respectively, the experimental results demonstrate that MQABC has exciting advantages over the latter three in terms of the running time in image thesholding, while maintaining the efficient segmentation quality.", "title": "" }, { "docid": "80114263a722c25125803c7c8ecebb91", "text": "features suggest that this patient is an atypical presentation of chemotherapy-induced acral erythema, sparing the classic palmar location. The suggestion for an overlapping spectrum of chemotherapyinduced toxic injury of the skin helps resolve the clinicopathological challenge of this case. Toxic erythema of chemotherapy describes a particular category of toxin-associated diseases, some of which are specific, eg, chemotherapyassociated neutrophilic hidradenitis, and others, such as the eruption presented, defy further classification. 
Although dermatologists will likely preserve some of their preferred appellations, the field of dermatology will benefit from including toxic erythema of chemotherapy within the conceptual framework of chemotherapy-associated dermatoses.", "title": "" }, { "docid": "5168f7f952d937460d250c44b43f43c0", "text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency of 900 MHz; it is well suited for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz, where the axial ratio of the proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and an impedance bandwidth of 256 MHz (28.5%).", "title": "" }, { "docid": "b0e316e2efe4b408985216a33492897b", "text": "Human activity detection within smart homes is one of the foundations of unobtrusive wellness monitoring of a rapidly aging population in developed countries. Most works in this area use the concept of \"activity\" as the building block with which to construct applications such as healthcare monitoring or ambient assisted living. The process of identifying a specific activity encompasses the selection of the appropriate set of sensors, the correct preprocessing of their raw data, and the learning/reasoning using this information. If the selection of the sensors and the data processing methods is performed incorrectly, the whole activity detection process may fail, leading to the consequent failure of the whole application. Related to this, the main contributions of this review are the following: first, we propose a classification of the main activities considered in smart home scenarios targeted at older people's independent living, as well as their characterization and formalized context representation; second, we perform a classification of sensors and data processing methods that are suitable for the detection of the aforementioned activities.
Our aim is to help researchers and developers with these lower-level technical aspects that are nevertheless fundamental for the success of the complete application.", "title": "" }, { "docid": "2b30506690acbae9240ef867e961bc6c", "text": "Background Breast milk can turn pink with Serratia marcescens colonization; this bacterium has been associated with several diseases and even death. It is seen most commonly in intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marcescens was isolated from the expressed breast milk in both. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both cases. Conclusions Pink breast milk is caused by S. marcescens colonization. In such cases, early recognition and treatment before the development of infection is recommended to enable a return to breastfeeding.", "title": "" }, { "docid": "b169a813dcaa659555f082911bcc843f", "text": "Pharmacogenomics studies the impact of genetic variation of patients on drug responses and searches for correlations between gene expression or Single Nucleotide Polymorphisms (SNPs) of a patient's genome and the toxicity or efficacy of a drug. SNP data, produced by microarray platforms, need to be preprocessed and analyzed in order to find correlations between the presence/absence of SNPs and the toxicity or efficacy of a drug. Due to the large number of samples and the high resolution of the instruments, the data to be analyzed can be huge, requiring high-performance computing. The paper presents the design and experimentation of Cloud4SNP, a novel Cloud-based bioinformatics tool for the parallel preprocessing and statistical analysis of pharmacogenomics SNP microarray data. Experimental evaluation shows good speed-up and scalability.
Moreover, the availability on the Cloud platform makes it possible to elastically meet the requirements of small as well as very large pharmacogenomics studies.", "title": "" }, { "docid": "fd0cfef7be75a9aa98229c25ffaea864", "text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "title": "" }, { "docid": "945b8c26961fb3a2329b6356b853b358", "text": "This paper presents a synteny visualization and analysis tool developed in connection with IMAS - the Interactive Multigenomic Analysis System. This visual analysis tool enables biologists to analyze the relationships among genomes of closely related organisms in terms of the locations of genes and clusters of genes. A biologist starts IMAS with the DNA sequence, uses BLAST to find similar genes in related sequences, and uses these similarity linkages to create an enhanced node-link diagram of syntenic sequences. We refer to this as Spring Synteny visualization, which is aimed at helping a biologist discover similar gene ordering relationships across species.
The paper describes the techniques that are used to support synteny visualization, in terms of computation, visual design, and interaction design.", "title": "" }, { "docid": "0ce05b9c26df484fc59366762d31465a", "text": "This paper presents an algorithm that extracts the tempo of a musical excerpt. The proposed system assumes a constant tempo and deals directly with the audio signal. A sliding window is applied to the signal and two feature classes are extracted. The first class is the log-energy of each band of a mel-scale triangular filterbank, a common feature vector used in various MIR applications. For the second class, a novel feature for the tempo induction task is presented; the strengths of the twelve western musical tones at all octaves are calculated for each audio frame, in a similar fashion to the Pitch Class Profile. The time-evolving feature vectors are convolved with a bank of resonators, each resonator corresponding to a target tempo. Then the results of each feature class are combined to give the final output. The algorithm was evaluated on the popular ISMIR 2004 Tempo Induction Evaluation Exchange Dataset. Results demonstrate that the superposition of the different types of features enhances the performance of the algorithm, which places it among the current state-of-the-art algorithms for the tempo induction task.", "title": "" }, { "docid": "712be4d6aabf8e76b050c30e6241ad0f", "text": "The United States, like many nations, continues to experience rapid growth in its racial minority population and is projected to attain so-called majority-minority status by 2050. Along with these demographic changes, staggering racial disparities persist in health, wealth, and overall well-being. In this article, we review the social psychological literature on race and race relations, beginning with the seemingly simple question: What is race?
Drawing on research from different fields, we forward a model of race as dynamic, malleable, and socially constructed, shifting across time, place, perceiver, and target. We then use classic theoretical perspectives on intergroup relations to frame and then consider new questions regarding contemporary racial dynamics. We next consider research on racial diversity, focusing on its effects during interpersonal encounters and for groups. We close by highlighting emerging topics that should top the research agenda for the social psychology of race and race relations in the twenty-first century.", "title": "" }, { "docid": "8a538c63adfd618d8967f736d8c59761", "text": "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).", "title": "" }, { "docid": "0ce5f897c55f40451878e37a4da1c91c", "text": "The analysis of drainage morphometry is usually a prerequisite to the assessment of the hydrological characteristics of a surface water basin. In this study, the western region of the Arabian Peninsula was selected for detailed morphometric analysis. In this region, there are a large number of drainage systems that originate from the mountain chains of the Arabian Shield to the east and drain into the Red Sea.
As a typical example of these drainage systems, the morphometry of Wadi Aurnah was analyzed. The study performed manual and computerized delineation and drainage sampling, which enables applying detailed morphological measures. Topographic maps in combination with remotely sensed data (i.e., different types of satellite images) were utilized to delineate the existing drainage system and thus to precisely identify water divides. This was achieved using a Geographic Information System (GIS) to provide computerized data that can be manipulated for different calculations and hydrological measures. The obtained morphometric analysis in this study tackled: 1) stream behavior, 2) the morphometric setting of streams within the drainage system, and 3) the interrelation between connected streams. The study introduces an empirical approach to morphometric analysis that can be utilized in different hydrological assessments (e.g., surface water harvesting, flood mitigation, etc.). In addition, the applied analysis using remote sensing and GIS can be followed in the remaining drainage systems of the Western Arabian Peninsula.", "title": "" }, { "docid": "0251f38f48c470e2e04fb14fc7ba34b2", "text": "The fast development of the Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand for smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market in device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks is analyzed and backdoors are identified.
A detailed security analysis procedure will be elaborated for a home automation system and a smart meter, proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.", "title": "" }, { "docid": "91e4994a20bb3b48ef3d70c3affa5c0c", "text": "In this paper, we address the challenging task of simultaneous recognition of overlapping sound events from single-channel audio. Conventional frame-based methods aren't well suited to the problem, as each time frame contains a mixture of information from multiple sources. Missing feature masks are able to improve the recognition in such cases, but are limited by the accuracy of the mask, which is a non-trivial problem. In this paper, we propose an approach based on Local Spectrogram Features (LSFs), which represent local spectral information that is extracted from the two-dimensional region surrounding “keypoints” detected in the spectrogram. The keypoints are designed to locate the sparse, discriminative peaks in the spectrogram, such that we can model sound events through a set of representative LSF clusters and their occurrences in the spectrogram. To recognise overlapping sound events, we use a Generalised Hough Transform (GHT) voting system, which sums the information over many independent keypoints to produce onset hypotheses that can detect any arbitrary combination of sound events in the spectrogram. Each hypothesis is then scored against the class distribution models to recognise the existence of the sound in the spectrogram. Experiments on a set of five overlapping sound events, in the presence of non-stationary background noise, demonstrate the potential of our approach.", "title": "" }, { "docid": "1dc0d5c7dbc0ae85a424b17e463bd7a4", "text": "Plasma protein binding (PPB) strongly affects drug distribution and pharmacokinetic behavior, with consequences for overall pharmacological action.
Extended plasma protein binding may be associated with drug safety issues and several adverse effects, like low clearance, low brain penetration, drug-drug interactions, and loss of efficacy, while influencing the fate of enantiomers and diastereoisomers by stereoselective binding within the body. Therefore, in holistic drug design approaches, where ADME(T) properties are considered in parallel with target affinity, considerable efforts are focused on early estimation of PPB, mainly in regard to human serum albumin (HSA), which is the most abundant and most important plasma protein. The second critical serum protein, α1-acid glycoprotein (AGP), although often underscored, also plays an important and complicated role in clinical therapy, and thus it has been studied thoroughly in recent years too. In the present review, after an overview of the principles of HSA and AGP binding as well as the structure topology of the proteins, the current trends and perspectives in the field of PPB predictions are presented and discussed, considering both HSA and AGP binding. However, since systematic studies on the latter protein have started only in recent years, the review focuses mainly on HSA. One part of the review highlights the challenge of developing rapid techniques for HSA and AGP binding simulation and their performance in the assessment of PPB. The second part focuses on in silico approaches to predict HSA and AGP binding, analyzing and evaluating structure-based and ligand-based methods, as well as combinations of both methods, with the aim of exploiting the different information and overcoming the limitations of each individual approach. Ligand-based methods use the Quantitative Structure-Activity Relationships (QSAR) methodology to establish quantitative models for the prediction of binding constants from molecular descriptors, while they provide only indirect information on the binding mechanism.
Efforts for the establishment of global models, automated workflows and web-based platforms for PPB predictions are presented and discussed. Structure-based methods relying on the crystal structures of drug-protein complexes provide detailed information on the underlying mechanism but are usually restricted to specific compounds. They are useful for identifying the specific binding site, and they may be important in investigating drug-drug interactions related to PPB. Moreover, chemometrics or structure-based modeling supported by experimental data constitutes a promising integrated alternative strategy for ADME(T) properties optimization. In the case of PPB, molecular modeling combined with bioanalytical techniques is frequently used for the investigation of AGP binding.", "title": "" }, { "docid": "604b46c973be0a277faa96a407dc845f", "text": "A nonlinear dynamic model for a quadrotor unmanned aerial vehicle is presented with a new vision of state parameter control, which is based on Euler angles and an open-loop position state observer. This method emphasizes the control of the roll, pitch, and yaw angles rather than the translational motions of the UAV. For this reason, the system has been divided into two cascaded parts: the first relates to the rotational motion, whose control law is applied in closed-loop form, and the other reflects the translational motion. A dynamic feedback controller is developed to transform the closed-loop part of the system into a linear, controllable and decoupled subsystem. Wind parameter estimation for the quadrotor is used to avoid additional sensors. Hence, an estimator of the resulting aerodynamic moments via a Lyapunov function is developed.
Performance and robustness of the proposed controller are tested in simulation.", "title": "" }, { "docid": "221970fad528f2538930556dde7a0062", "text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (B-CNN), which has shown dramatic performance gains on certain fine-grained recognition problems [15]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [12]. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computerized face detection system, it does not have the bias inherent in such a database. We demonstrate the performance of the B-CNN model beginning from an AlexNet-style network pre-trained on ImageNet. We then show results for fine-tuning using a moderate-sized and public external database, FaceScrub [17]. We also present results with additional fine-tuning on the limited training data provided by the protocol. In each case, the fine-tuned bilinear model shows substantial improvements over the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a large face database, the recently released VGG-Face model [20], can be converted into a B-CNN without any additional feature training. This B-CNN improves upon the CNN performance on the IJB-A benchmark, achieving 89.5% rank-1 recall.", "title": "" }, { "docid": "c3ae2b20405aa932bb5ada3874cdd29c", "text": "In this letter, a novel compact quadrature hybrid using low-pass and high-pass lumped elements is proposed. This proposed topology enables significant circuit size reduction in comparison with former approaches applying microstrip branch line or Lange couplers. 
In addition, it provides wider bandwidth in terms of operational frequency, and is more convenient for the monolithic microwave integrated circuit layout since it does not have any bulky via holes, compared to previously published lumped-element designs. Moreover, the simulation and measurement results of the fabricated hybrid, implemented using PHEMT processes, are evidently good. With the operational bandwidth ranging from 25 to 30 GHz, the measured results of the return loss are better than 17.6 dB, and the insertion losses of the coupled and direct ports are approximately 3.4 ± 0.7 dB, while the relative phase difference is approximately 92.3 ± 1.4°. The core dimension of the circuit is 0.4 mm × 0.15 mm.", "title": "" }, { "docid": "be1b9731df45408571e75d1add5dfe9c", "text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "title": "" } ]
scidocsrr
86cf5b7c33c66b58bd9240d95967ff13
Semi-supervised Learning with Encoder-Decoder Recurrent Neural Networks: Experiments with Motion Capture Sequences
[ { "docid": "711daac04e27d0a413c99dd20f6f82e1", "text": "Gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently, most systems only classify datasets with a couple of dozen different actions. Moreover, feature extraction from the data is often computationally complex. In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features, a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs input data. We use a deep autoencoder to visualize learnt features. The experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis can. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95%, which is, to our knowledge, the state-of-the-art result for such a large dataset.", "title": "" } ]
[ { "docid": "9e05a37d781d8a3ee0ecca27510f1ae9", "text": "Context: Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far was on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization to improve its automotive testing process. With this we contribute in (1) providing experiences on using evidence based processes to analyze a real world automotive test process; and (2) provide evidence of challenges and related solutions for automotive software testing processes. Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process including case study research (gain an understanding of practical questions to define a research scope), systematic literature review (identify solutions through systematic literature), and value stream mapping (map out an improved automotive test process based on the current situation and improvement suggestions identified). These are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 out of those 26 challenges our domain specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g. 
scoping systematic reviews to focus more on concrete industry problems, and understanding strategies of conducting EBSE with respect to effort and quality of the evidence).", "title": "" }, { "docid": "80394c124d823e7639af06fd33ef99c1", "text": "We investigate whether income inequality affects subsequent growth in a cross-country sample for 1965-90, using the models of Barro (1997), Bleaney and Nishiyama (2002) and Sachs and Warner (1997), with negative results. We then investigate the evolution of income inequality over the same period and its correlation with growth. The dominating feature is inequality convergence across countries. This convergence has been significantly faster amongst developed countries. Growth does not appear to influence the evolution of inequality over time. Outline", "title": "" }, { "docid": "0ba907b893e3017dd55a67ae7c43b276", "text": "Android applications (apps for short) can send out users' sensitive information against users' intention. Based on the stats from Genome and Mobile-Sandboxing, 55.8% and 59.7% Android malware families feature privacy leakage. Prior approaches to detecting privacy leakage on smartphones primarily focused on the discovery of sensitive information flows. However, Android apps also send out users' sensitive information for legitimate functions. Due to the fuzzy nature of the privacy leakage detection problem, we formulate it as a justification problem, which aims to justify if a sensitive information transmission in an app serves any purpose, either for intended functions of the app itself or for other related functions. This formulation makes the problem more distinct and objective, and therefore more feasible to solve than before. We propose DroidJust, an automated approach to justifying an app's sensitive information transmission by bridging the gap between the sensitive information transmission and application functions. 
We also implement a prototype of DroidJust and evaluate it with over 6000 Google Play apps and over 300 known malware collected from VirusTotal. Our experiments show that our tool can effectively and efficiently analyze Android apps w.r.t their sensitive information flows and functionalities, and can greatly assist in detecting privacy leakage.", "title": "" }, { "docid": "da74e402f4542b6cbfb27f04c7640eb4", "text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.", "title": "" }, { "docid": "d79d6dd8267c66ad98f33bd54ff68693", "text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. 
Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.", "title": "" }, { "docid": "8aff34c5a9f80fab499d4014cafba278", "text": "Social influence is the behavioral change of a person because of the perceived relationship with other people, organizations and society in general. Social influence has been a widely accepted phenomenon in social networks for decades. Many applications have been built based around the implicit notation of social influence between people, such as marketing, advertisement and recommendations. With the exponential growth of online social network services such as Facebook and Twitter, social influence can for the first time be measured over a large population. In this tutorial, we survey the research on social influence analysis with a focus on the computational aspects. First, we introduce how to verify the existence of social influence in various social networks. Second, we present computational models for quantifying social influence. Third, we describe how social influence can help real applications. 
In particular, we will focus on opinion leader finding and influence maximization for viral marketing. Finally, we apply the selected algorithms of social influence analysis on different social network data, such as Twitter, ArnetMiner data, Weibo, and the Slashdot forum.", "title": "" }, { "docid": "ecb146ae27419d9ca1911dc4f13214c1", "text": "In this paper, a simple mixed integer programming model for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it to a model describing the location problem for a green supply chain. Subsequently, the IBM Watson implosion technology (WIT) tool was introduced to describe them and solve them. By changing the price of crude oil, we illustrate its impact on distribution center locations and transportation mode options for the green supply chain. From the case studies, we find that, as the crude oil price increases, the profits of the whole supply chain will decrease, carbon emissions will also decrease to some degree, while the number of opened distribution centers will increase.", "title": "" }, { "docid": "427d0d445985ac4eb31c7adbaf6f1e22", "text": "In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes, such as image cropping, feature re-calculation, word separation, and character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, ground-truth bounding boxes and text labels. 
The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Through multi-task training, the learned features become more informative and improve the overall performance. Our proposed method has achieved competitive performance on several benchmark datasets.", "title": "" }, { "docid": "a1eff890cfc0d1334ebea1d90d152ae5", "text": "The purpose of this research was to develop understanding about how vendor firms make choices about agile methodologies in software projects and their fit. Two analytical frameworks were developed from extant literature and the findings were compared with real-world decisions. Framework 1 showed that the choice of XP for one project was not supported by the guidelines given by the framework. The choices of SCRUM for the other two projects were partially supported. Analysis using framework 2 showed that, except for one XP project, all others had sufficient project management support, limited scope for adaptability and had prominence for rules.", "title": "" }, { "docid": "8c26ab9cb2b5bc30c29b722ab7efe135", "text": "Conscious \"free will\" is problematic because (1) brain mechanisms causing consciousness are unknown, (2) measurable brain activity correlating with conscious perception apparently occurs too late for real-time conscious response, consciousness thus being considered \"epiphenomenal illusion,\" and (3) determinism, i.e., our actions and the world around us seem algorithmic and inevitable. The Penrose-Hameroff theory of \"orchestrated objective reduction (Orch OR)\" identifies discrete conscious moments with quantum computations in microtubules inside brain neurons, e.g., 40/s in concert with gamma synchrony EEG. Microtubules organize neuronal interiors and regulate synapses. 
In Orch OR, microtubule quantum computations occur in integration phases in dendrites and cell bodies of integrate-and-fire brain neurons connected and synchronized by gap junctions, allowing entanglement of microtubules among many neurons. Quantum computations in entangled microtubules terminate by Penrose \"objective reduction (OR),\" a proposal for quantum state reduction and conscious moments linked to fundamental spacetime geometry. Each OR reduction selects microtubule states which can trigger axonal firings, and control behavior. The quantum computations are \"orchestrated\" by synaptic inputs and memory (thus \"Orch OR\"). If correct, Orch OR can account for conscious causal agency, resolving problem 1. Regarding problem 2, Orch OR can cause temporal non-locality, sending quantum information backward in classical time, enabling conscious control of behavior. Three lines of evidence for brain backward time effects are presented. Regarding problem 3, Penrose OR (and Orch OR) invokes non-computable influences from information embedded in spacetime geometry, potentially avoiding algorithmic determinism. In summary, Orch OR can account for real-time conscious causal agency, avoiding the need for consciousness to be seen as epiphenomenal illusion. Orch OR can rescue conscious free will.", "title": "" }, { "docid": "714c06da1a728663afd8dbb1cd2d472d", "text": "This paper proposes hybrid semi-Markov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. 
Second, a CRF output layer and an SCRF output layer are integrated into a unified neural network and trained jointly. Experimental results on the CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.", "title": "" }, { "docid": "41131af8c79ddfde932ecb5cff0c274d", "text": "We investigated whether experts can objectively focus on feature information in fingerprints without being misled by extraneous information, such as context. We took fingerprints that have previously been examined and assessed by latent print experts to make positive identification of suspects. Then we presented these same fingerprints again, to the same experts, but gave a context that suggested that they were a no-match, and hence the suspects could not be identified. Within this new context, most of the fingerprint experts made different judgements, thus contradicting their own previous identification decisions. Cognitive aspects involved in biometric identification can explain why experts are vulnerable to making erroneous identifications.", "title": "" }, { "docid": "4cb94c63d5c32a15977ed08553f8a80c", "text": "In the machine learning community it is generally believed that graph Laplacians corresponding to a finite sample of data points converge to a continuous Laplace operator if the sample size increases. Even though this assertion serves as a justification for many Laplacian-based algorithms, so far only some aspects of this claim have been rigorously proved. In this paper we close this gap by establishing the strong pointwise consistency of a family of graph Laplacians with data-dependent weights to some weighted Laplace operator. Our investigation also includes the important case where the data lies on a submanifold of R.", "title": "" }, { "docid": "18b744209b3918d6636a87feed2597c6", "text": "Robot learning is critically enabled by the availability of appropriate state representations. 
We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of taskirrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.", "title": "" }, { "docid": "41cfa26891e28a76c1d4508ab7b60dfb", "text": "This paper analyses the digital simulation of a buck converter to emulate the photovoltaic (PV) system with focus on fuzzy logic control of buck converter. A PV emulator is a DC-DC converter (buck converter in the present case) having same electrical characteristics as that of a PV panel. The emulator helps in the real analysis of PV system in an environment where using actual PV systems can produce inconsistent results due to variation in weather conditions. The paper describes the application of fuzzy algorithms to the control of dynamic processes. The complete system is modelled in MATLAB® Simulink SimPowerSystem software package. 
The results obtained from the simulation studies are presented and the steady state and dynamic stability of the PV emulator system is discussed.", "title": "" }, { "docid": "e53b56da0d9221528a8020bf422522ce", "text": "This paper proposed a design of a modern FPGA-based Traffic Light Control (TLC) System to manage the road traffic. The approach is by controlling the access to areas shared among multiple intersections and allocating effective time between various users; during peak and off-peak hours. The implementation is based on real location in a city in Malaysia where the existing traffic light controller is a basic fixed-time method. This method is inefficient and almost always leads to traffic congestion during peak hours while drivers are given unnecessary waiting time during off-peak hours. The proposed design is a more universal and intelligent approach to the situation and has been implemented using FPGA. The system is implemented on ALTERA FLEX10K chip and simulation results are proven to be successful. Theoretically the waiting time for drivers during off-peak hours has been reduced further, therefore making the system better than the one being used at the moment. Future improvements include addition of other functions to the proposed design to suit various traffic conditions at different locations.", "title": "" }, { "docid": "1f88243ef61c52941208a9e92eb1a420", "text": "The maximum operating distance and the optimum performance (good coupling, lower/moderate power consumption) of even the well-designed NFC-reader-antenna in an RFID system depend largely on the good matching circuit. With the aforementioned objective, the paper presents here a modeling and computer aided design and then parameter extraction technique of a NFC-Reader Antenna. The 3D geometry model of the antenna is then simulated in frequency domain using Comsol multiphysics tool in order to extract the Reader-Antenna parameters. 
The extracted parameters at the 13.56 MHz frequency are required for further RF simulation, based on which matching circuit components (damping resistance and series & parallel capacitances etc.) of the Reader-Antenna at the above frequency have been selected to achieve the best performance of the antenna.", "title": "" }, { "docid": "a5be27d89874b1dfcad85206ad7403ba", "text": "The upcoming Fifth Generation (5G) networks can provide ultra-reliable ultra-low latency vehicle-to-everything for vehicular ad hoc networks (VANET) to promote road safety, traffic management, information dissemination, and automatic driving for drivers and passengers. However, 5G-VANET also attracts tremendous security and privacy concerns. Although several pseudonymous authentication schemes have been proposed for VANET, the expensive cost of their initial authentication may cause serious denial of service (DoS) attacks, which can furthermore do great harm in physical space via VANET. Motivated by this, a puzzle-based co-authentication (PCA) scheme is proposed here. In the PCA scheme, the Hash puzzle is carefully designed to mitigate DoS attacks against the pseudonymous authentication process, which is facilitated through collaborative verification. The effectiveness and efficiency of the proposed scheme are validated by performance analysis based on theory and experimental results.", "title": "" }, { "docid": "f65c027ab5baa981667955cc300d2f34", "text": "In-band full-duplex (FD) wireless communication, i.e. simultaneous transmission and reception at the same frequency, in the same channel, promises up to 2x spectral efficiency, along with advantages in higher network layers [1]. The main challenge is dealing with strong in-band leakage from the transmitter to the receiver (i.e. self-interference (SI)), as TX powers are typically >100dB stronger than the weakest signal to be received, necessitating TX-RX isolation and SI cancellation. 
Performing this SI-cancellation solely in the digital domain, if at all possible, would require extremely clean (low-EVM) transmission and a huge dynamic range in the RX and ADC, which is currently not feasible [2]. Cancelling SI entirely in analog is not feasible either, since the SI contains delayed TX components reflected by the environment. Cancelling these requires impractically large amounts of tunable analog delay. Hence, FD-solutions proposed thus far combine SI-rejection at RF, analog BB, digital BB and cross-domain.", "title": "" }, { "docid": "3bca1dd8dc1326693f5ebbe0eaf10183", "text": "This paper presents a novel multi-way multi-stage power divider design method based on the theory of small reflections. Firstly, the application of the theory of small reflections is extended from transmission line to microwave network. Secondly, an explicit closed-form analytical formula of the input reflection coefficient, which consists of the scattering parameters of power divider elements and the lengths of interconnection lines between each element, is derived. Thirdly, the proposed formula is applied to determine the lengths of interconnection lines. A prototype of a 16-way 4-stage power divider working at 4 GHz is designed and fabricated. Both the simulation and measurement results demonstrate the validity of the proposed method.", "title": "" } ]
scidocsrr
ec02edc5b59e82dc1d5b837df54e12d3
NVC-Hashmap: A Persistent and Concurrent Hashmap For Non-Volatile Memories
[ { "docid": "14e92e2c9cd31db526e084669d15903c", "text": "This paper presents three building blocks for enabling the efficient and safe design of persistent data stores for emerging non-volatile memory technologies. Taking the fullest advantage of the low latency and high bandwidths of emerging memories such as phase change memory (PCM), spin torque, and memristor necessitates a serious look at placing these persistent storage technologies on the main memory bus. Doing so, however, introduces critical challenges of not sacrificing the data reliability and consistency that users demand from storage. This paper introduces techniques for (1) robust wear-aware memory allocation, (2) preventing of erroneous writes, and (3) consistency-preserving updates that are cache-efficient. We show through our evaluation that these techniques are efficiently implementable and effective by demonstrating a B+-tree implementation modified to make full use of our toolkit.", "title": "" } ]
[ { "docid": "f09bc6f1b4f37fc4d822ccc4cdc1497f", "text": "It is generally believed that a metaphor tends to have a stronger emotional impact than a literal statement; however, there is no quantitative study establishing the extent to which this is true. Further, the mechanisms through which metaphors convey emotions are not well understood. We present the first data-driven study comparing the emotionality of metaphorical expressions with that of their literal counterparts. Our results indicate that metaphorical usages are, on average, significantly more emotional than literal usages. We also show that this emotional content is not simply transferred from the source domain into the target, but rather is a result of meaning composition and interaction of the two domains in the metaphor.", "title": "" }, { "docid": "1d03d6f7cd7ff9490dec240a36bf5f65", "text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.", "title": "" }, { "docid": "25a7f23c146add12bfab3f1fc497a065", "text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. 
The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).", "title": "" }, { "docid": "bfbd291ce302fc2d7bd8909bd0f7e01a", "text": "The correlative change analysis of state parameters can provide powerful technical support for the safe, reliable, and highly efficient operation of power transformers. However, the analysis methods are primarily based on a single or a few state parameters, and hence the potential failures can hardly be found and predicted. In this paper, a data-driven method of association rule mining for transformer state parameters has been proposed by combining the Apriori algorithm and a probabilistic graphical model. This method overcomes the disadvantage that all data items have to be scanned cyclically whenever frequent items are searched. This method is used in mining association rules of the numerical solutions of differential equations. The result indicates that association rules among the numerical solutions can be accurately mined. Finally, practical measured data of five 500 kV transformers is analyzed by the proposed method. The association rules of various state parameters have been extracted, and then the mined association rules are used in modifying the prediction results of single state parameters. The results indicate that the application of the mined association rules improves the accuracy of prediction. 
Therefore, the effectiveness and feasibility of the proposed method in association rule mining have been proved.", "title": "" }, { "docid": "da1ac93453bc9da937df4eb49902fbe5", "text": "A novel hierarchical multimodal attention-based model is developed in this paper to generate more accurate and descriptive captions for images. Our model is an \"end-to-end\" neural network which contains three related sub-networks: a deep convolutional neural network to encode image contents, a recurrent neural network to identify the objects in images sequentially, and a multimodal attention-based recurrent neural network to generate image captions. The main contribution of our work is that the hierarchical structure and the multimodal attention mechanism are both applied, thus each caption word can be generated with the multimodal attention on the intermediate semantic objects and the global visual content. Our experiments on two benchmark datasets have obtained very positive results.", "title": "" }, { "docid": "d2f6b3fee7f40eb580451d9cc29b8aa6", "text": "Compositional Distributional Semantic methods model the distributional behavior of a compound word by exploiting the distributional behavior of its constituent words. In this setting, a constituent word is typically represented by a feature vector conflating all the senses of that word. However, not all the senses of a constituent word are relevant when composing the semantics of the compound. In this paper, we present two different methods for selecting the relevant senses of constituent words. The first one is based on Word Sense Induction and creates static multi-prototype vectors representing the senses of a constituent word. The second creates a single dynamic prototype vector for each constituent word based on the distributional properties of the other constituents in the compound. We use these prototype vectors for composing the semantics of noun-noun compounds and evaluate on a compositionality-based similarity task. 
Our results show that: (1) selecting relevant senses of the constituent words leads to a better semantic composition of the compound, and (2) dynamic prototypes perform better than static prototypes.", "title": "" }, { "docid": "29df7f7e7739bd78f0d72986d43e3adf", "text": "AHA/ACCF/HRS Recommendations for the Standardization and Interpretation of the Electrocardiogram: Part V: Electrocardiogram Changes Associated With Cardiac Chamber Hypertrophy. A Scientific Statement From the American Heart Association Electrocardiography and Arrhythmias Committee, Council on Clinical Cardiology; the American College of Cardiology Foundation; and the Heart Rhythm Society. Endorsed by the International Society for Computerized Electrocardiology. E. William Hancock, Barbara J. Deal, David M. Mirvis, Peter Okin, Paul Kligfield, and Leonard S. Gettes. J. Am. Coll. Cardiol. 2009;53;992-1002; originally published online Feb 19, 2009. The online version of this article, along with updated information and services, is located on the World Wide Web at: http://content.onlinejacc.org/cgi/content/full/53/11/992. This information is current as of August 2, 2011.", "title": "" }, { "docid": "6b1dc94c4c70e1c78ea32a760b634387", "text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of single-image depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations.
Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.", "title": "" }, { "docid": "531e30bf9610b82f6fc650652e6fc836", "text": "A versatile microreactor platform featuring a novel chemical-resistant microvalve array has been developed using combined silicon/polymer micromachining and a special polymer membrane transfer process. The basic valve unit in the array has a typical ‘transistor’ structure and a PDMS/parylene double-layer valve membrane. A robust multiplexing algorithm is also proposed for individual addressing of a large array using a minimal number of signal inputs. The in-channel microvalve is leakproof upon pneumatic actuation. In open status it introduces small impedance to the fluidic flow, and allows a significantly larger dynamic range of flow rates (∼ml min−1) compared with most of the microvalves reported. Equivalent electronic circuits were established by modeling the microvalves as PMOS transistors and the fluidic channels as simple resistors to provide theoretical prediction of the device fluidic behavior. The presented microvalve/reactor array showed excellent chemical compatibility in the tests with several typical aggressive chemicals including those seriously degrading PDMS-based microfluidic devices. Combined with the multiplexing strategy, this versatile array platform can find a variety of lab-on-a-chip applications such as addressable multiplex biochemical synthesis/assays, and is particularly suitable for those requiring tough chemicals, large flow rates and/or high-throughput parallel processing. As an example, the device performance was examined through the addressed synthesis of 30-mer DNA oligonucleotides followed by sequence validation using on-chip hybridization. 
The results showed leakage-free valve array addressing and proper synthesis in target reactors, as well as uniform flow distribution and excellent regional reaction selectivity.", "title": "" }, { "docid": "b483d6fbe7d41af453e89c2d793eb1a2", "text": "Representing human decisions is of fundamental importance in agent-based models. However, the rationale for choosing a particular human decision model is often not sufficiently empirically or theoretically substantiated in the model documentation. Furthermore, it is difficult to compare models because the model descriptions are often incomplete, not transparent and difficult to understand. Therefore, we expand and refine the ‘ODD’ (Overview, Design Concepts and Details) protocol to establish a standard for describing ABMs that includes human decision-making (ODD+D). Because the ODD protocol originates mainly from an ecological perspective, some adaptations are necessary to better capture human decision-making. We extended and rearranged the design concepts and related guiding questions to differentiate and describe decision-making, adaptation and learning of the agents in a comprehensive and clearly structured way. The ODD+D protocol also incorporates a section on ‘Theoretical and Empirical Background’ to encourage model designs and model assumptions that are more closely related to theory. The application of the ODD+D protocol is illustrated with a description of a social-ecological ABM on water use.
Although the ODD+D protocol was developed on the basis of example implementations within the socio-ecological scientific community, we believe that the ODD+D protocol may prove helpful for describing ABMs in general when human decisions are included.", "title": "" }, { "docid": "2f1862591d5f9ee80d7cdcb930f86d8d", "text": "In this research, convolutional neural networks are used to recognize whether a car on a given image is damaged or not. Using transfer learning to take advantage of available models that are trained on a more general object recognition task, very satisfactory performances have been achieved, which indicate the great opportunities of this approach. In the end, a promising attempt at classifying car damages into a few different classes is also presented. Along the way, the main focus was on the influence of certain hyper-parameters and on seeking theoretically founded ways to adapt them, all with the objective of progressing to satisfactory results as fast as possible. This research opens doors for future collaborations on image recognition projects in general and for the car insurance field in particular.", "title": "" }, { "docid": "9828a83e8b28b3b0d302a25da9120763", "text": "For robotic manipulators that are redundant or with high degrees of freedom (dof), an analytical solution to the inverse kinematics is very difficult or impossible. Pioneer 2 robotic arm (P2Arm) is a recently developed and widely used 5-dof manipulator. There is no effective solution to its inverse kinematics to date. This paper presents a first complete analytical solution to the inverse kinematics of the P2Arm, which makes it possible to control the arm to any reachable position in an unstructured environment.
The strategies developed in this paper could also be useful for solving the inverse kinematics problem of other types of robotic arms.", "title": "" }, { "docid": "4bfb6e5b039dd434e0c8aed461536acf", "text": "In many applications transactions between the elements of an information hierarchy occur over time. For example, the product offers of a department store can be organized into product groups and subgroups to form an information hierarchy. A market basket consisting of the products bought by a customer forms a transaction. Market baskets of one or more customers can be ordered by time into a sequence of transactions. Each item in a transaction is associated with a measure, for example, the amount paid for a product.\n In this paper we present a novel method for visualizing sequences of these kinds of transactions in information hierarchies. It uses a tree layout to draw the hierarchy and a timeline to represent progression of transactions in the hierarchy. We have developed several interaction techniques that allow the users to explore the data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from several very different application domains.", "title": "" }, { "docid": "716f8cadac94110c4a00bc81480a4b66", "text": "The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, which results in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims on the superiority of their proposal. 
However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations we designed for each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.", "title": "" }, { "docid": "d8d91ea6fe6ce56a357a9b716bdfe849", "text": "Over the last years, automatic music classification has become a standard benchmark problem in the machine learning community. This is partly due to its inherent difficulty, and also to the impact that a fully automated classification system can have in a commercial application. In this paper we test the efficiency of a relatively new learning tool, Extreme Learning Machines (ELM), for several classification tasks on publicly available song datasets. ELM is gaining increasing attention, due to its versatility and speed in adapting its internal parameters. Since both of these attributes are fundamental in music classification, ELM provides a good alternative to standard learning models. Our results support this claim, showing a sustained gain of ELM over a feedforward neural network architecture. In particular, ELM provides a great decrease in computational training time, and has always higher or comparable results in terms of efficiency.", "title": "" }, { "docid": "abba5d320a4b6bf2a90ba2b836019660", "text": "We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. 
To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.", "title": "" }, { "docid": "69624e1501b897bf1a9f9a5a84132da3", "text": "360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only video content in viewers’ Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets.
However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra efforts to align the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) or enable novel applications (like crowd-driven camera movements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings of MMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: http://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming", "title": "" }, { "docid": "68c7509ec0261b1ddccef7e3ad855629", "text": "This research comprehensively illustrates the design, implementation and evaluation of a novel markerless environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display, the tracking technology was able to compensate for these limitations by rendering a very efficient, precise, and intuitive navigation experience.
The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.", "title": "" }, { "docid": "75e5308959bfed2cf54af052b66798b2", "text": "This article describes a design and implementation of an augmented desk system, named EnhancedDesk, which smoothly integrates paper and digital information on a desk. The system provides users an intelligent environment that automatically retrieves and displays digital information corresponding to the real objects (e.g., books) on the desk by using computer vision. The system also provides users direct manipulation of digital information by using the users' own hands and fingers for more natural and more intuitive interaction. Based on the experiments with our first prototype system, some critical issues on augmented desk systems were identified when trying to pursue rapid and fine recognition of hands and fingers. To overcome these issues, we developed a novel method for realtime finger tracking on an augmented desk system by introducing a infrared camera, pattern matching with normalized correlation, and a pan-tilt camera. We then show an interface prototype on EnhancedDesk. It is an application to a computer-supported learning environment, named Interactive Textbook. The system shows how effective the integration of paper and digital information is and how natural and intuitive direct manipulation of digital information with users' hands and fingers is.", "title": "" }, { "docid": "c4b17bc4c36ce3792c6b560f75cc66e9", "text": "We examined the association among anxiety, religiosity, meaning of life and mental health in a nonclinical sample from a Chinese society. 
Four hundred fifty-one Taiwanese adults (150 males and 300 females) ranging in age from 17 to 73 years (M = 28.9, SD = 11.53) completed measures of Beck Anxiety Inventory, Medical Outcomes Study Health Survey, Perceived Stress Scale, Social Support Scale, and Personal Religiosity Scale (measuring religiosity and meaning of life). Meaning of life has a significant negative correlation with anxiety and a significant positive correlation with mental health and religiosity; however, religiosity does not correlate significantly with anxiety and mental health after controlling for demographic measures, social support and physical health. Anxiety explains unique variance in mental health above meaning of life. Meaning of life was found to partially mediate the relationship between anxiety and mental health. These findings suggest that benefits of meaning of life for mental health can be at least partially accounted for by the effects of underlying anxiety.", "title": "" } ]
scidocsrr
6cfc7d6395dedf0a03524325127533b3
Three-phase Flyback-Boost DC-DC converter with three-phase high frequency isolation
[ { "docid": "00ac09dab67200f6b9df78a480d6dbd8", "text": "In this paper, a new three-phase current-fed push-pull DC-DC converter is proposed. This converter uses a high-frequency three-phase transformer that provides galvanic isolation between the power source and the load. The three active switches are connected to the same reference, which simplifies the gate drive circuitry. Reduction of the input current ripple and the output voltage ripple is achieved by means of an inductor and a capacitor, whose volumes are smaller than in equivalent single-phase topologies. The three-phase DC-DC conversion also helps in loss distribution, allowing the use of lower cost switches. These characteristics make this converter suitable for applications where low-voltage power sources are used and the associated currents are high, such as in fuel cells, photovoltaic arrays, and batteries. The theoretical analysis, a simplified design example, and the experimental results for a 1-kW prototype will be presented for two operation regions. The prototype was designed for a switching frequency of 40 kHz, an input voltage of 120 V, and an output voltage of 400 V.", "title": "" } ]
[ { "docid": "c77186175130cae151641690ef0564dd", "text": "This paper considers the security problem of outsourcing storage from user devices to the cloud. A secure searchable encryption scheme is presented to enable searching of encrypted user data in the cloud. The scheme simultaneously supports fuzzy keyword searching and matched results ranking, which are two important factors in facilitating practical searchable encryption. A chaotic fuzzy transformation method is proposed to support secure fuzzy keyword indexing, storage and query. A secure posting list is also created to rank the matched results while maintaining the privacy and confidentiality of the user data, and saving the resources of the user mobile devices. Comprehensive tests have been performed and the experimental results show that the proposed scheme is efficient and suitable for a secure searchable cloud storage system.", "title": "" }, { "docid": "3c014a3e17f6d200a132e31b51ad7fad", "text": "This paper studies a fault tolerant control strategy for a four wheel skid steering mobile robot (SSMR). Through this work the fault diagnosis procedure is accomplished using a structural analysis technique while fault accommodation is based on a Recursive Least Squares (RLS) approximation. The goal is to detect faults as early as possible and recalculate command inputs in order to achieve fault tolerance, which means that despite the fault occurrences the system is able to recover its original task with the same or degraded performance. Fault tolerance can be considered to be constituted by two basic tasks: fault diagnosis and control redesign. In our research, using the diagnosis approach presented in our previous work, we addressed mainly the second task, proposing a framework for fault tolerant control, which allows retaining acceptable performance under system faults.
In order to prove the efficacy of the proposed method, an experimental procedure was carried out using a Pioneer 3-AT mobile robot.", "title": "" }, { "docid": "48303e0519f6fe8e2106318329b84b46", "text": "Endowing an intelligent agent with an episodic memory affords it a multitude of cognitive capabilities. However, providing efficient storage and retrieval in a task-independent episodic memory presents considerable theoretical and practical challenges. We characterize the computational issues bounding an episodic memory. We explore whether even with intractable asymptotic growth, it is possible to develop efficient algorithms and data structures for episodic memory systems that are practical for real-world tasks. We present and evaluate formal and empirical results using Soar-EpMem: a task-independent integration of episodic memory with Soar 9, providing a baseline for graph-based, task-independent episodic memory systems.", "title": "" }, { "docid": "469c17aa0db2c70394f081a9a7c09be5", "text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based).
The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.", "title": "" }, { "docid": "7d90646ca1b2b8f96fd808ef6f544b09", "text": "Tanagra is a mixed-initiative tool for level design, allowing a human and a computer to work together to produce a level for a 2-D platformer. An underlying, reactive level generator ensures that all levels created in the environment are playable, and provides the ability for a human designer to rapidly view many different levels that meet their specifications. The human designer can iteratively refine the level by placing and moving level geometry, as well as through directly manipulating the pacing of the level. This paper presents the design environment, its underlying architecture that integrates reactive planning and numerical constraint solving, and an evaluation of Tanagra's expressive range.", "title": "" }, { "docid": "5f52b31afe9bf18f009a10343ccedaf0", "text": "The preservation of image quality under various display conditions becomes more and more important in the multimedia era. A considerable amount of effort has been devoted to compensating the quality degradation caused by dim LCD backlight for mobile devices and desktop monitors. However, most previous enhancement methods for backlight-scaled images only consider the luminance component and overlook the impact of color appearance on image quality. In this paper, we propose a fast and elegant method that exploits the anchoring property of human visual system to preserve the color appearance of backlight-scaled images as much as possible. Our approach is distinguished from previous ones in many aspects. First, it has a sound theoretical basis. Second, it takes the luminance and chrominance components into account in an integral manner. 
Third, it has low complexity and can process 720p high-definition videos at 35 frames per second without flicker. The superior performance of the proposed method is verified through psychophysical tests.", "title": "" }, { "docid": "ada8d97fcedf2ef2053237acf686069a", "text": "Proof of stake is a consensus mechanism for digital currencies that is an alternative to proof of work used in Bitcoin. The main declared advantages of proof of stake approaches are the absence of expensive computations and hence a lower entry barrier for block generation rewards. In this report, we examine the pros and cons of both consensus systems and show that existing implementations of proof of stake are vulnerable to attacks which are highly unlikely in Bitcoin and proof of work approaches in general. Version History Version Date Change description 1.0 Sep 13, 2015 Initial version © 2015 Bitfury Group Limited The underlying database structure for transactions of Bitcoin and other digital currencies is a decentralized ledger, called the blockchain, which stores the entire transaction history. The name stems from the fact that transactions are bundled into blocks; each block in the blockchain (except for the first i.e. genesis block) references a previous block. Each node participating in the Bitcoin network has its own copy of the blockchain, which is synchronized with other nodes using a peer-to-peer protocol1. Any implementation of digital currency must have a way to secure its blockchain against attacks.
For example, an attacker may spend some money and then reverse the spending transaction by broadcasting his own version of the blockchain, which does not include this transaction; as security of the blockchain does not rely on a single authority, users have no prior knowledge as to which version of the ledger is valid. In Bitcoin, the security of the network relies on a proof of work (PoW) algorithm in the form of block mining. Each node that wants to participate in mining is required to solve a computationally difficult problem to ensure the validity of the newly mined block; solutions are rewarded with bitcoins. The protocol is fair in the sense that a miner with p fraction of the total computational power can win the reward and create a block with the probability p. An attacker is required to solve the same tasks as the rest of the Bitcoin network; i.e., an attack on Bitcoin will only be successful if the attacker can bring to bear significant computational resources. Operation of the Bitcoin protocol is such that security of the network is supported by physically scarce resources: • specialized hardware needed to run computations, and • electricity spent to power the hardware. This makes Bitcoin inefficient from a resource standpoint. To increase their share of rewards, Bitcoin miners are compelled to participate in an arms race to continuously deploy more resources in mining. While this makes the cost of an attack on Bitcoin prohibitively high, the ecological unfriendliness of the Bitcoin protocol has resulted in proposals to build similar systems that are much less resource intensive. One possible decentralized ledger implementation with security not based on expensive computations relies on proof of stake (PoS) algorithms. The idea behind proof of stake is simple: instead of mining power, the probability to create a block and receive the associated reward is proportional to a user’s ownership stake in the system.
An individual stakeholder who has p fraction of the total number of coins in circulation creates a new block with p probability. The rationale behind proof of stake is the following: users with the highest stakes in the system have the most interest to maintain a secure network, as they will suffer the most if the reputation and price of the cryptocurrency would diminish because of the attacks. To mount a successful attack, an [Footnote 1: Strictly speaking, there exist Bitcoin nodes that do not store the entire blockchain, but rather rely on simplified payment verification [1]. We don’t consider these nodes in the following research, as they do not contribute to the security of the network.]", "title": "" }, { "docid": "e52c40a4fcb6cdb3d9b177e371127185", "text": "Over the last years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to get away from precise, but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials—from scratch. Our manipulator is inaccurate and provides no pose feedback. For learning a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain by exploiting the sequential structure of the stacking task.", "title": "" }, { "docid": "98e3279056e9bc15ce4b32c6dc027af9", "text": "Publication Information Bazrafkan, Shabab, Javidnia, Hossein, Lemley, Joseph, & Corcoran, Peter (2018).
Semiparallel deep neural network hybrid architecture: first application on depth from monocular camera. Journal of Electronic Imaging, 27(4), 19. doi: 10.1117/1.JEI.27.4.043041 Publisher Society of Photo-optical Instrumentation Engineers (SPIE) Link to publisher's version https://dx.doi.org/10.1117/1.JEI.27.4.043041", "title": "" }, { "docid": "d88b845296811f881e46ed04e6caca31", "text": "OBJECTIVES\nThis study evaluated how patient characteristics and duplex ultrasound findings influence management decisions of physicians with specific expertise in the field of chronic venous disease.\n\n\nMETHODS\nWorldwide, 346 physicians with a known interest and experience in phlebology were invited to participate in an online survey about management strategies in patients with great saphenous vein (GSV) reflux and refluxing tributaries. The survey included two basic vignettes representing a 47 year old healthy male with GSV reflux above the knee and a 27 year old healthy female with a short segment refluxing GSV (CEAP classification C2sEpAs2,5Pr in both cases). Participants could choose one or more treatment options. Subsequently, the basic vignettes were modified according to different patient characteristics (e.g. older age, morbid obesity, anticoagulant treatment, peripheral arterial disease), clinical class (C4, C6), and duplex ultrasound findings (e.g. competent terminal valve, larger or smaller GSV diameter, presence of focal dilatation). The authors recorded the distribution of chosen management strategies; adjustment of strategies according to characteristics; and follow up strategies.\n\n\nRESULTS\nA total of 211 physicians (68% surgeons, 12% dermatologists, 12% angiologists, and 8% phlebologists) from 36 different countries completed the survey. In the basic case vignettes 1 and 2, respectively, 55% and 40% of participants proposed to perform endovenous thermal ablation, either with or without concomitant phlebectomies (p < .001). 
Looking at the modified case vignettes, between 20% and 64% of participants proposed to adapt their management strategy, opting for either a more or a less invasive treatment, depending on the modification introduced. The distribution of chosen management strategies changed significantly for all modified vignettes (p < .05).\n\n\nCONCLUSIONS\nThis study illustrates the worldwide variety in management preferences for treating patients with varicose veins (C2-C6). In clinical practice, patient related and duplex ultrasound related factors clearly influence therapeutic options.", "title": "" }, { "docid": "f1fe8a9d2e4886f040b494d76bc4bb78", "text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.", "title": "" }, { "docid": "33a436e4b987093fdd5f1fcc1a4b74cf", "text": "Observational methods are fundamental to the study of human behavior in the behavioral sciences. For example, in the context of research on intimate relationships, psychologists’ hypotheses are often empirically tested by video recording interactions of couples and manually coding relevant behaviors using standardized coding systems. This coding process can be time-consuming, and the resulting coded data may have a high degree of variability because of a number of factors (e.g., inter-evaluator differences). 
These challenges provide an opportunity to employ engineering methods to aid in automatically coding human behavioral data. In this work, we analyzed a large corpus of married couples’ problem-solving interactions. Each spouse was manually coded with multiple session-level behavioral observations (e.g., level of blame toward other spouse), and we used acoustic speech features to automatically classify extreme instances for six selected codes (e.g., “low” vs. “high” blame). Specifically, we extracted prosodic, spectral, and voice quality features to capture global acoustic properties for each spouse and trained gender-specific and gender-independent classifiers. The best overall automatic system correctly classified 74.1% of the instances, an improvement of 3.95% absolute (5.63% relative) over our previously reported best results. We compare performance for the various factors: across codes, gender, classifier type, and feature type. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4a5ced961de32d383427e8825bb5c41b", "text": "1. Top-down control can be an important determinant of ecosystem structure and function, but in oceanic ecosystems, where cascading effects of predator depletions, recoveries, and invasions could be significant, such effects had rarely been demonstrated until recently. 2. Here we synthesize the evidence for oceanic top-down control that has emerged over the last decade, focusing on large, high trophic-level predators inhabiting continental shelves, seas, and the open ocean. 3. In these ecosystems, where controlled manipulations are largely infeasible, 'pseudo-experimental' analyses of predator-prey interactions that treat independent predator populations as 'replicates', and temporal or spatial contrasts in predator populations and climate as 'treatments', are increasingly employed to help disentangle predator effects from environmental variation and noise. 4. 
Substantial reductions in marine mammals, sharks, and piscivorous fishes have led to mesopredator and invertebrate predator increases. Conversely, abundant oceanic predators have suppressed prey abundances. Predation has also inhibited recovery of depleted species, sometimes through predator-prey role reversals. Trophic cascades have been initiated by oceanic predators linking to neritic food webs, but seem inconsistent in the pelagic realm with effects often attenuating at plankton. 5. Top-down control is not uniformly strong in the ocean, and appears contingent on the intensity and nature of perturbations to predator abundances. Predator diversity may dampen cascading effects except where nonselective fisheries deplete entire predator functional groups. In other cases, simultaneous exploitation of predator and prey can inhibit prey responses. Explicit consideration of anthropogenic modifications to oceanic foodwebs should help inform predictions about trophic control. 6. Synthesis and applications. Oceanic top-down control can have important socio-economic, conservation, and management implications as mesopredators and invertebrates assume dominance, and recovery of overexploited predators is impaired. Continued research aimed at integrating across trophic levels is needed to understand and forecast the ecosystem effects of changing oceanic predator abundances, the relative strength of top-down and bottom-up control, and interactions with intensifying anthropogenic stressors such as climate change.", "title": "" }, { "docid": "6e100d0a1b213e1b04a0492f88c6d24a", "text": "Ovarian cysts over 5 and 15 cm in diameter are described as large and giant, respectively3. In addition, women having large cysts without regression in 6-8 weeks time are candidates for surgery. 
Although data has been published on laparoscopic or laparoscopy assisted management of large and giant cysts, midline laparotomy is still preferred by many surgeons, particularly in cases of giant cysts.", "title": "" }, { "docid": "1abcf9480879b3d29072f09d5be8609d", "text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.", "title": "" }, { "docid": "5c5e0e1800afab7ce790f726fd5c5c8f", "text": "Transformative applications are computation intensive applications characterized by iterative dataflow behavior. Typical examples are image processing applications like JPEG, MPEG, etc. The performance of embedded hardware–software systems that implement transformative applications can be maximized by obtaining a pipelined design. We present a tool for hardware–software partitioning and pipelined scheduling of transformative applications. The tool uses iterative partitioning and pipelined scheduling to obtain optimal partitions that satisfy the timing and area constraints. 
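The partitioning objective just described — meeting timing and area constraints while optimizing the pipeline — can be illustrated with a toy search. This is a deliberately naive exhaustive sketch, not the paper's branch-and-bound algorithm, and the task data, the one-CPU software model, and the one-stage-per-hardware-task assumption are all invented for illustration:

```python
from itertools import product

# Toy hardware/software partitioning sketch (NOT the paper's branch-and-bound):
# exhaustively enumerate HW/SW assignments, discard those over the area budget,
# and keep the one minimizing the pipeline initiation interval (II).
# Assumed model: SW tasks share one CPU (their times add up per iteration),
# while each HW task forms its own pipeline stage (times do not add up).

def best_partition(tasks, area_budget):
    """tasks: list of (name, sw_time, hw_time, hw_area)."""
    best = None
    for assign in product(("SW", "HW"), repeat=len(tasks)):
        area = sum(t[3] for t, a in zip(tasks, assign) if a == "HW")
        if area > area_budget:
            continue  # violates the area constraint
        sw_total = sum(t[1] for t, a in zip(tasks, assign) if a == "SW")
        hw_max = max((t[2] for t, a in zip(tasks, assign) if a == "HW"), default=0)
        ii = max(sw_total, hw_max)  # slowest pipeline stage bounds throughput
        if best is None or ii < best[0]:
            best = (ii, assign)
    return best

tasks = [("dct", 9, 3, 2), ("quant", 4, 2, 1), ("huff", 6, 5, 2)]
print(best_partition(tasks, area_budget=3))  # → (6, ('HW', 'HW', 'SW'))
```

A real tool prunes this exponential space with bounds instead of enumerating it, which is exactly what branch-and-bound buys.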
The partitioner uses a branch and bound approach with a unique objective function that minimizes the initiation interval of the final design. We present techniques for generation of good initial solution and search-space limitation for the branch and bound algorithm. A candidate partition is evaluated by generating its pipelined schedule. The scheduler uses a novel retiming heuristic that optimizes the initiation interval, number of pipeline stages, and memory requirements of the particular design alternative. We evaluate the performance of the retiming heuristic by comparing it with an existing technique. The effectiveness of the entire tool is demonstrated by a case study of the JPEG image compression algorithm. We also evaluate the run time and design quality of the tool by experimentation with synthetic graphs.", "title": "" }, { "docid": "748d71e6832288cd0120400d6069bf50", "text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull", "title": "" }, { "docid": "971692db73441f7c68a0cc32927ae0b2", "text": "This letter presents a new lattice-form complex adaptive IIR notch filter to estimate and track the frequency of a complex sinusoid signal. 
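As a rough illustration of single-tone frequency tracking, the following is a generic exponentially weighted least-squares one-tap predictor whose coefficient angle estimates the frequency — a minimal sketch only, not the letter's lattice-form notch filter, and the forgetting factor and signal below are invented:

```python
import cmath

# Generic exponentially weighted frequency tracker for a complex sinusoid
# x[n] = A*exp(j*(w*n + phi)) + noise: fit a one-tap predictor
# x[n] ≈ w1 * x[n-1] recursively; the angle of w1 estimates the frequency.

def track_frequency(samples, lam=0.95):
    num = 0.0 + 0.0j   # weighted sum of x[n] * conj(x[n-1])
    den = 0.0          # weighted sum of |x[n-1]|^2
    estimates = []
    for prev, cur in zip(samples, samples[1:]):
        num = lam * num + cur * prev.conjugate()
        den = lam * den + abs(prev) ** 2
        estimates.append(cmath.phase(num / den))
    return estimates

w = 0.4  # true frequency in rad/sample
x = [cmath.exp(1j * w * n) for n in range(200)]
print(round(track_frequency(x)[-1], 6))  # → 0.4
```

The forgetting factor lam trades tracking speed against noise smoothing, the same trade-off the exponentially weighted RLS formulation manages.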
The IIR filter is a cascade of a direct-form all-pole prefilter and an adaptive lattice-form all-zero filter. A complex domain exponentially weighted recursive least square algorithm is adopted instead of the widely used least mean square algorithm to increase the convergence rate. The convergence property of this algorithm is investigated, and an expression for the steady-state asymptotic bias is derived. Analysis results indicate that the frequency estimate for a single complex sinusoid is unbiased. Simulation results demonstrate that the proposed method achieves faster convergence and better tracking performance than all traditional algorithms.", "title": "" }, { "docid": "e2589af8d7cb0958ed9225d58be895df", "text": "The crowding of wireless band has necessitated the development of multiband and wideband wireless antennas. Because of the self similar characteristics, fractal concepts have emerged as a design methodology for compact multiband antennas. A Koch-like fractal curve is proposed to transform ultra-wideband (UWB) bow-tie into so called Koch-like sided fractal bow-tie dipole. A small isosceles triangle is cut off from center of each side of the initial isosceles triangle, then the procedure iterates along the sides like Koch curve does, forming the Koch-like fractal bow-tie geometry, used for multiband applications. ADS software is used to design the proposed antennna. It has covers the applications like GSM, wireless band and other wireless communications. Keywords—fractal; koch curve; bow tie antenna; ADS(Advanced Design System);", "title": "" }, { "docid": "9c008dc2f3da4453317ce92666184da0", "text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. 
Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator, QEMU. Our implementation can simulate various cache configurations and capture every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer, which is close to native execution of an ARM Cortex-M3 (125 MIPS at 100 MHz). Compared to the widely used cache simulation tool Valgrind, our simulator is three times faster.", "title": "" } ]
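A direct-mapped hit/miss model of the kind such cache-simulation modules implement can be sketched in a few lines. The line size, set count, and address trace below are made-up illustrative parameters, not QEMU's:

```python
# Minimal direct-mapped cache model: each address maps to exactly one line;
# a hit requires the stored tag for that line to match.

class DirectMappedCache:
    def __init__(self, num_lines=4, line_size=16):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.line_size
        index = block % self.num_lines   # which cache line this block maps to
        tag = block // self.num_lines    # identifies the block within that line
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1             # miss: fill the line
            self.tags[index] = tag

cache = DirectMappedCache()
for addr in [0, 4, 64, 0, 128, 0]:      # 64 and 128 conflict with address 0
    cache.access(addr)
print(cache.hits, cache.misses)         # → 1 5
```

The trace deliberately provokes conflict misses: addresses 0, 64, and 128 all map to line 0, so they keep evicting each other.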
scidocsrr
2e3f4dbfecdf6b4835e0c068b916cca7
What Motivates Consumers to Write Online Travel Reviews?
[ { "docid": "1993b540ff91922d381128e9c8592163", "text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-ofmouth have been discussed.", "title": "" }, { "docid": "c57cbe432fdab3f415d2c923bea905ff", "text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.", "title": "" } ]
[ { "docid": "39007b91989c42880ff96e7c5bdcf519", "text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "7e439ac3ff2304b6e1aaa098ff44b0cb", "text": "Geological structures, such as faults and fractures, appear as image discontinuities or lineaments in remote sensing data. Geologic lineament mapping is a very important issue in geo-engineering, especially for construction site selection, seismic, and risk assessment, mineral exploration and hydrogeological research. 
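Automated alternatives typically detect edge pixels and then link them into straight segments; a minimal Hough-transform line vote (a generic sketch, not the TecLines toolbox, with invented resolution parameters and a synthetic point set) looks like this:

```python
import math

# Minimal Hough-transform line detection: every edge pixel votes for all
# (theta, rho) lines through it; the accumulator's peak is the dominant line,
# parameterized as rho = x*cos(theta) + y*sin(theta).

def hough_peak(points, n_theta=180, rho_res=1.0):
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t, rho), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t / n_theta, rho * rho_res, votes

# Synthetic edge pixels lying on the vertical line x = 5:
pts = [(5, y) for y in range(10)]
theta, rho, votes = hough_peak(pts)
print(round(theta, 3), rho, votes)  # → 0.0 5.0 10
```

All ten points vote into the same cell (theta = 0, i.e. a line normal along x, at distance rho = 5), which is why Hough voting is robust to gaps along a lineament.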
Classical methods of lineaments extraction are based on semi-automated (or visual) interpretation of optical data and digital elevation models. We developed a freely available Matlab based toolbox TecLines (Tectonic Lineament Analysis) for locating and quantifying lineament patterns using satellite data and digital elevation models. TecLines consists of a set of functions including frequency filtering, spatial filtering, tensor voting, Hough transformation, and polynomial fitting. Due to differences in the mathematical background of the edge detection and edge linking procedure as well as the breadth of the methods, we introduce the approach in two-parts. In this first study, we present the steps that lead to edge detection. We introduce the data pre-processing using selected filters in spatial and frequency domains. We then describe the application of the tensor-voting framework to improve position and length accuracies of the detected lineaments. We demonstrate the robustness of the approach in a complex area in the northeast of Afghanistan using a panchromatic QUICKBIRD-2 image with 1-meter resolution. Finally, we compare the results of TecLines with manual lineament extraction, and other lineament extraction algorithms, as well as a published fault map of the study area. OPEN ACCESS Remote Sens. 2014, 6 5939", "title": "" }, { "docid": "1feaf48291b7ea83d173b70c23a3b7c0", "text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). 
For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).", "title": "" }, { "docid": "358423f8ef08080935f280d71ae921a0", "text": "Many of contemporary computer and machine vision applications require finding of corresponding points across multiple images. To that goal, among many features, the most commonly used are corner points. Corners are formed by two or more edges, and mark the boundaries of objects or boundaries between distinctive object parts. This makes corners the feature points that used in a wide range of tasks. Therefore, numerous corner detectors with different properties have been developed. In this paper, we present a complete FPGA architecture implementing corer detection. This architecture is based on the FAST algorithm. The proposed solution is capable of processing the incoming image data with the speed of hundreds of frames per second for a 512 × , 8-bit gray-scale image. The speed is comparable to the results achieved by top-of-the-shelf general purpose processors. 
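The segment test at the heart of FAST, which such architectures implement in hardware, can be sketched in software. This is a simplified illustration: the threshold, arc length, and test image are invented, and boundary handling is omitted (only interior pixels may be queried):

```python
# Simplified FAST-style corner test: a pixel is a corner if n contiguous
# pixels on the 16-pixel Bresenham circle of radius 3 are all brighter
# (or all darker) than the center by more than a threshold t.

CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):                    # brighter arc, then darker arc
        flags = [sign * (p - center) > t for p in ring]
        run = 0
        for f in flags + flags:             # doubled to handle wrap-around runs
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# Synthetic 11x11 image: a bright square whose top-left corner is at (5, 5).
SIZE = 11
img = [[255 if x >= 5 and y >= 5 else 0 for x in range(SIZE)] for y in range(SIZE)]
print(is_fast_corner(img, 5, 5), is_fast_corner(img, 3, 3))  # → True False
```

Production implementations add the well-known short-circuit pre-test on four diametrically opposed pixels, which is what makes FAST cheap enough for FPGA pipelines.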
However, the use of inexpensive FPGA allows to cut costs, power consumption and to reduce the footprint of a complete system solution. The paper includes also a brief description of the implemented algorithm, resource usage summary, resulting images, as well as block diagrams of the described architecture.", "title": "" }, { "docid": "6c07520a738f068f1bc3bdb8e3fda89b", "text": "We analyze the role of the Global Brain in the sharing economy, by synthesizing the notion of distributed intelligence with Goertzel’s concept of an offer network. An offer network is an architecture for a future economic system based on the matching of offers and demands without the intermediate of money. Intelligence requires a network of condition-action rules, where conditions represent challenges that elicit action in order to solve a problem or exploit an opportunity. In society, opportunities correspond to offers of goods or services, problems to demands. Tackling challenges means finding the best sequences of condition-action rules to connect all demands to the offers that can satisfy them. This can be achieved with the help of AI algorithms working on a public database of rules, demands and offers. Such a system would provide a universal medium for voluntary collaboration and economic exchange, efficiently coordinating the activities of all people on Earth. It would replace and subsume the patchwork of commercial and community-based sharing platforms presently running on the Internet. It can in principle resolve the traditional problems of the capitalist economy: poverty, inequality, externalities, poor sustainability and resilience, booms and busts, and the neglect of non-monetizable values.", "title": "" }, { "docid": "c49ed75ce48fb92db6e80e4fe8af7127", "text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. 
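A minimal one-class classifier in that spirit — fit on target samples only, flag anything far from them — might look like the following. This is a toy z-score sketch for illustration, not a method taken from the survey, and the threshold and data are invented:

```python
import math

# Tiny one-class classifier: model only the target class by per-feature
# mean/std, and flag a sample as an outlier when its mean absolute
# z-score exceeds a threshold. No negative examples are ever seen.

class OneClassZScore:
    def fit(self, X, threshold=3.0):
        n, d = len(X), len(X[0])
        self.mean = [sum(row[j] for row in X) / n for j in range(d)]
        self.std = [max(1e-9, math.sqrt(sum((row[j] - self.mean[j]) ** 2
                                            for row in X) / n))
                    for j in range(d)]
        self.threshold = threshold
        return self

    def predict(self, x):
        z = sum(abs(xj - m) / s
                for xj, m, s in zip(x, self.mean, self.std)) / len(x)
        return "target" if z <= self.threshold else "outlier"

clf = OneClassZScore().fit([[1.0, 2.0], [1.2, 1.9], [0.9, 2.1], [1.1, 2.0]])
print(clf.predict([1.05, 2.0]), clf.predict([9.0, -5.0]))  # → target outlier
```

The essential OCC property is visible here: the decision boundary is derived entirely from the target distribution, so any poorly-sampled negative class never biases it.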
The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.", "title": "" }, { "docid": "7c10a44e5fa0f9e01951e89336c4b4d6", "text": "Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students’ searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel’s (2008) three aspects of web literacy skills (searching, reading, and evaluating), this qualitative study aims to better understand a group of graduate engineering students’ searching, reading, and evaluating processes for research purposes. Through in-depth interviews and the think-aloud protocol, we compared the strategies employed by 22 Taiwanese graduate engineering students. The results showed that the students’ online research behaviors included seeking and obtaining, reading and interpreting, and assessing and evaluating sources. The findings suggest that specialized training for preparing novice researchers to critically evaluate relevant information or scholarly work to fulfill their research purposes is needed. Implications for enhancing the information literacy of engineering students are discussed.", "title": "" }, { "docid": "1a65a6e22d57bb9cd15ba01943eeaa25", "text": "(1) + optimal local factor; – expensive for general obs.; + exploit conj. graph structure; + arbitrary inference queries; + natural gradients. (2) – suboptimal local factor; + fast for general obs.; – does all local inference; – limited inference queries; – no natural gradients. (3) ± optimal given conj. evidence; + fast for general obs.; + exploit conj. graph structure; + arbitrary inference queries; + some natural gradients.", "title": "" }, { "docid": "80a61f27dab6a8f71a5c27437254778b", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "8770cfba83e16454e5d7244201d47628", "text": "Representing documents is a crucial component in many NLP tasks, for instance predicting aspect ratings in reviews. Previous methods for this task treat documents globally, and do not acknowledge that target categories are often assigned by their authors with generally no indication of the specific sentences that motivate them.
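One generic way to make per-sentence contributions explicit is a softmax-weighted (saliency) average of sentence-level scores. The sketch below illustrates only the aggregation step with invented numbers; it is not the paper's trained multiple-instance regression model, where both the weights and the scores are learned:

```python
import math

# Saliency-weighted aggregation: sentence scores are combined into one
# document-level prediction via softmax weights, so each sentence's
# contribution to the predicted rating is explicit and interpretable.

def aggregate(sentence_scores, saliency_logits):
    exps = [math.exp(l) for l in saliency_logits]
    total = sum(exps)
    weights = [e / total for e in exps]            # softmax over sentences
    prediction = sum(w * s for w, s in zip(weights, sentence_scores))
    return prediction, weights

# Three sentences with aspect scores on a 1-5 scale; the second sentence
# carries most of the saliency.
pred, w = aggregate([2.0, 5.0, 3.0], [0.0, 2.0, 0.0])
print(round(pred, 3), [round(x, 3) for x in w])
```

Because the weights sum to one, they read directly as "this sentence explains this much of the rating", which is the interpretability property the passage describes.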
To address this issue, we adopt a weakly supervised learning model, which jointly learns to focus on relevant parts of a document according to the context along with a classifier for the target categories. Derived from the weighted multiple-instance regression (MIR) framework, the model learns decomposable document vectors for each individual category and thus overcomes the representational bottleneck in previous methods due to a fixed-length document vector. During prediction, the estimated relevance or saliency weights explicitly capture the contribution of each sentence to the predicted rating, thus offering an explanation of the rating. Our model achieves state-of-the-art performance on multi-aspect sentiment analysis, improving over several baselines. Moreover, the predicted saliency weights are close to human estimates obtained by crowdsourcing, and increase the performance of lexical and topical features for review segmentation and summarization.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.", "title": "" }, { "docid": "80b514540933a9cc31136c8cb86ec9b3", "text": "We tackle the problem of detecting occluded regions in a video stream. 
Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.", "title": "" }, { "docid": "18fd966db335ee53ff4d82781c2f81d8", "text": "Disastrous events are cordially involved with the momentum of nature. As such mishaps have been showing off own mastery, situations have gone beyond the control of human resistive mechanisms far ago. Fortunately, several technologies are in service to gain affirmative knowledge and analysis of a disaster’s occurrence. Recently, Internet of Things (IoT) paradigm has opened a promising door toward catering of multitude problems related to agriculture, industry, security, and medicine due to its attractive features, such as heterogeneity, interoperability, light-weight, and flexibility. This paper surveys existing approaches to encounter the relevant issues with disasters, such as early warning, notification, data analytics, knowledge aggregation, remote monitoring, real-time analytics, and victim localization. Simultaneous interventions with IoT are also given utmost importance while presenting these facts. A comprehensive discussion on the state-of-the-art scenarios to handle disastrous events is presented. Furthermore, IoT-supported protocols and market-ready deployable products are summarized to address these issues. 
Finally, this survey highlights open challenges and research trends in IoT-enabled disaster management systems.", "title": "" }, { "docid": "ca932a0b6b71f009f95bad6f2f3f8a38", "text": "Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1].
The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. To date, The Supply Chain Management Processes", "title": "" }, { "docid": "8f957dab2aa6b186b61bc309f3f2b5c3", "text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.", "title": "" }, { "docid": "f509e4c35a4dbc7b7ba88711d8a7b0ea", "text": "The promises and potential of Big Data in transforming digital government services, governments, and the interaction between governments, citizens, and the business sector, are substantial. From \"smart\" government to transformational government, Big Data can foster collaboration; create real-time solutions to challenges in agriculture, health, transportation, and more; and usher in a new era of policy- and decision-making. There are, however, a range of policy challenges to address regarding Big Data, including access and dissemination; digital asset management, archiving and preservation; privacy; and security. This paper selectively reviews and analyzes the U.S. policy context regarding Big Data and offers recommendations aimed at facilitating Big Data initiatives.", "title": "" }, { "docid": "9e44f467f7fbcd2ab1c6886bbb0099c0", "text": "Email has become one of the fastest and most economical forms of communication. 
However, the increase in email users has resulted in a dramatic increase in spam emails during the past few years. In this paper, email data was classified using four different classifiers (Neural Network, SVM classifier, Naïve Bayesian Classifier, and J48 classifier). The experiment was performed based on different data sizes and different feature sizes. The final classification result should be ‘1’ if it is finally spam, otherwise, it should be ‘0’. This paper shows that the simple J48 classifier, which builds a binary tree, can be efficient for datasets that can be classified as a binary tree.", "title": "" }, { "docid": "96508fe94ab9e47534f2cc09b4b186a8", "text": "A 300 GHz frequency synthesizer incorporating a triple-push VCO with Colpitts-based active varactor (CAV) and a divider with three-phase injection is introduced. The CAV provides frequency tunability, enhances harmonic power, and buffers/injects the VCO fundamental signal from/to the divider. The locking range of the divider is vastly improved due to the fact that the three-phase injection introduces larger allowable phase change and injection power into the divider loop. Implemented in 90 nm SiGe BiCMOS, the synthesizer achieves a phase-noise of -77.8 dBc/Hz (-82.5 dBc/Hz) at 100 kHz (1 MHz) offset with a crystal reference, and an overall locking range of 280.32-303.36 GHz (7.9%).", "title": "" }, { "docid": "7816f9fc22866f2c4f12313715076a20", "text": "Much progress has been made in image-to-image translation by embracing Generative Adversarial Networks (GANs). However, it is still very challenging for translation tasks that require high quality, especially at high resolution and photorealism. In this paper, we present Discriminative Region Proposal Adversarial Networks (DRPAN) for high-quality image-to-image translation. 
We decompose the image-to-image translation task into three iterated steps: the first is to generate an image with global structure but some local artifacts (via GAN); the second is to use our DRPnet to propose the most fake region from the generated image; and the third is to implement “image inpainting” on the most fake region through a reviser for a more realistic result, so that the system (DRPAN) can be gradually optimized to synthesize images with more attention on the most artifact-prone local part. Experiments on a variety of image-to-image translation tasks and datasets validate that our method outperforms the state of the art in producing high-quality translation results in terms of both human perceptual studies and automatic quantitative measures.", "title": "" } ]
scidocsrr
1c134e6fa0f2c18e9624284fb32eda81
The Fallacy of the Net Promoter Score : Customer Loyalty Predictive Model
[ { "docid": "7401c7f3a396a76e9a806863bef7ff7c", "text": "Complexity surrounding the holistic nature of customer experience has made measuring customer perceptions of interactive service experiences, challenging. At the same time, advances in technology and changes in methods for collecting explicit customer feedback are generating increasing volumes of unstructured textual data, making it difficult for managers to analyze and interpret this information. Consequently, text mining, a method enabling automatic extraction of information from textual data, is gaining in popularity. However, this method has performed below expectations in terms of depth of analysis of customer experience feedback and accuracy. In this study, we advance linguistics-based text mining modeling to inform the process of developing an improved framework. The proposed framework incorporates important elements of customer experience, service methodologies and theories such as co-creation processes, interactions and context. This more holistic approach for analyzing feedback facilitates a deeper analysis of customer feedback experiences, by encompassing three value creation elements: activities, resources, and context (ARC). Empirical results show that the ARC framework facilitates the development of a text mining model for analysis of customer textual feedback that enables companies to assess the impact of interactive service processes on customer experiences. The proposed text mining model shows high accuracy levels and provides flexibility through training. As such, it can evolve to account for changing contexts over time and be deployed across different (service) business domains; we term it an “open learning” model. The ability to timely assess customer experience feedback represents a pre-requisite for successful co-creation processes in a service environment. Accepted as: Ordenes, F. V., Theodoulidis, B., Burton, J., Gruber, T., & Zaki, M. (2014). 
Analyzing Customer Experience Feedback Using Text Mining: A Linguistics-Based Approach. Journal of Service Research, August, 17(3), 278-295.", "title": "" } ]
[ { "docid": "32f55ca936d96b92c1bf38d51cd183b3", "text": "Traditionally, a Certification Authority (CA) is required to sign, manage, verify and revoke public key certificates. Multiple CAs together form the CA-based Public Key Infrastructure (PKI). The use of a PKI forces one to place trust in the CAs, which have proven to be a single point-of-failure on multiple occasions. Blockchain has emerged as a transformational technology that replaces centralized trusted third parties with a decentralized, publicly verifiable, peer-to-peer data store which maintains data integrity among nodes through various consensus protocols. In this paper, we deploy three blockchain-based alternatives to the CA-based PKI for supporting IoT devices, based on Emercoin Name Value Service (NVS), smart contracts by Ethereum blockchain, and Ethereum Light Sync client. We compare these approaches with CA-based PKI and show that they are much more efficient in terms of computational and storage requirements in addition to providing a more robust and scalable PKI.", "title": "" }, { "docid": "48c28572e5eafda1598a422fa1256569", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. 
Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" }, { "docid": "bc0294e230abff5c47d5db0d81172bbc", "text": "Pulse radiolysis experiments were used to characterize the intermediates formed from ibuprofen during electron beam irradiation in a solution of 0.1 mmol dm(-3). For end product characterization (60)Co γ-irradiation was used and the samples were evaluated either by taking their UV-vis spectra or by HPLC with UV or MS detection. The reactions of OH resulted in hydroxycyclohexadienyl type radical intermediates. In further reactions, the intermediates produced hydroxylated derivatives of ibuprofen as final products. The hydrated electron attacked the carboxyl group. Ibuprofen degradation is more efficient under oxidative conditions than under reductive conditions. The ecotoxicity of the solution was monitored by the Daphnia magna standard microbiotest and the Vibrio fischeri luminescent bacteria test. The toxic effect of the aerated ibuprofen solution first increased upon irradiation, indicating a higher toxicity of the first degradation products, then decreased with increasing absorbed dose.", "title": "" }, { "docid": "92625cb17367de65a912cb59ea767a19", "text": "With the fast progression of electronic data exchange, information security is becoming more important in data storage and transmission. Because images are widely used in industrial processes, it is important to protect confidential image data from unauthorized access. In this paper, we analyzed current image encryption algorithms, and compression is added for two of them (Mirror-like image encryption and Visual Cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of the analysis are given in this paper. 
Keywords—image encryption, image cryptosystem, security, transmission.", "title": "" }, { "docid": "b67fadb3f5dca0e74bebc498260f99a4", "text": "The interactive computation paradigm is reviewed and a particular example is extended to form the stochastic analog of a computational process via a transcription of a minimal Turing Machine into an equivalent asynchronous Cellular Automaton with an exponential waiting times distribution of effective transitions. Furthermore, a special toolbox for analytic derivation of recursive relations of important statistical and other quantities is introduced in the form of an Inductive Combinatorial Hierarchy.", "title": "" }, { "docid": "ff9ca485a07dca02434396eca0f0c94f", "text": "Clustering is an NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partition-based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from becoming trapped in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. 
The experimental results show the applicability of our proposed method.", "title": "" }, { "docid": "f60e01205f1760c3aac261a05dfd7695", "text": "The recommendation system is one of the core technologies for implementing personalization services. Recommendation systems in a ubiquitous computing environment should have the capability of context-awareness. In this research, we developed a music recommendation system, which we shall call C_Music, which utilizes not only the user’s demographics and behavioral patterns but also the user’s context. For a specific user in a specific context, C_Music recommends the music that similar users listened to most in a similar context. In evaluating the performance of C_Music using real-world data, it outperforms the comparative system that utilizes the user’s demographics and behavioral patterns only.", "title": "" }, { "docid": "dcc55431a2da871c60abfd53ce270bad", "text": "Synchrophasor Standards have evolved since the introduction of the first one, IEEE Standard 1344, in 1995. IEEE Standard C37.118-2005 introduced measurement accuracy under steady state conditions as well as interference rejection. In 2009, the IEEE started a joint project with IEC to harmonize real time communications in IEEE Standard C37.118-2005 with the IEC 61850 communication standard. These efforts led to the need to split the C37.118 into 2 different standards: IEEE Standard C37.118.1-2011, which now includes performance of synchrophasors under dynamic system conditions; and IEEE Standard C37.118.2-2011, Synchrophasor Data Transfer for Power Systems, the object of this paper.", "title": "" }, { "docid": "3371fe8778b813360debc384040c510e", "text": "Medication non-adherence is a major concern in the healthcare industry and has led to increases in health risks and medical costs. For many neurological diseases, adherence to medication regimens can be assessed by observing movement patterns. 
However, physician observations are typically assessed based on visual inspection of movement and are limited to clinical testing procedures. Consequently, medication adherence is difficult to measure when patients are away from the clinical setting. The authors propose a data mining driven methodology that uses low cost, non-wearable multimodal sensors to model and predict patients' adherence to medication protocols, based on variations in their gait. The authors conduct a study involving Parkinson's disease patients that are \"on\" and \"off\" their medication in order to determine the statistical validity of the methodology. The data acquired can then be used to quantify patients' adherence while away from the clinic. Accordingly, this data-driven system may allow for early warnings regarding patient safety. Using whole-body movement data readings from the patients, the authors were able to discriminate between PD patients on and off medication, with accuracies greater than 97% for some patients using an individually customized model and accuracies of 78% for a generalized model containing multiple patient gait data. The proposed methodology and study demonstrate the potential and effectiveness of using low cost, non-wearable hardware and data mining models to monitor medication adherence outside of the traditional healthcare facility. These innovations may allow for cost effective, remote monitoring of treatment of neurological diseases.", "title": "" }, { "docid": "216698730aa68b3044f03c64b77e0e62", "text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. 
The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.", "title": "" }, { "docid": "dce032d1568e8012053de20fa7063c25", "text": "Radial visualization continues to be a popular design choice in information visualization systems, due perhaps in part to its aesthetic appeal. However, it is an open question whether radial visualizations are truly more effective than their Cartesian counterparts. In this paper, we describe an initial user trial from an ongoing empirical study of the SQiRL (Simple Query interface with a Radial Layout) visualization system, which supports both radial and Cartesian projections of stacked bar charts. Participants were shown 20 diagrams employing a mixture of radial and Cartesian layouts and were asked to perform basic analysis on each. The participants' speed and accuracy for both visualization types were recorded. Our initial findings suggest that, in spite of the widely perceived advantages of Cartesian visualization over radial visualization, both forms of layout are, in fact, equally usable. Moreover, radial visualization may have a slight advantage over Cartesian for certain tasks. In a follow-on study, we plan to test users' ability to create, as well as read and interpret, radial and Cartesian diagrams in SQiRL.", "title": "" }, { "docid": "b151343a4c1e365ede70a71880065aab", "text": "Cardiovascular disease (CVD) and depression are common. Patients with CVD have more depression than the general population. Persons with depression are more likely to eventually develop CVD and also have a higher mortality rate than the general population. Patients with CVD, who are also depressed, have a worse outcome than those patients who are not depressed. 
There is a graded relationship: the more severe the depression, the higher the subsequent risk of mortality and other cardiovascular events. It is possible that depression is only a marker for more severe CVD which so far cannot be detected using our currently available investigations. However, given the increased prevalence of depression in patients with CVD, a causal relationship with either CVD causing more depression or depression causing more CVD and a worse prognosis for CVD is probable. There are many possible pathogenetic mechanisms that have been described, which are plausible and that might well be important. However, whether or not there is a causal relationship, depression is the main driver of quality of life and requires prevention, detection, and management in its own right. Depression after an acute cardiac event is commonly an adjustment disorder than can improve spontaneously with comprehensive cardiac management. Additional management strategies for depressed cardiac patients include cardiac rehabilitation and exercise programmes, general support, cognitive behavioural therapy, antidepressant medication, combined approaches, and probably disease management programmes.", "title": "" }, { "docid": "e45c07c42c1a7f235dd5cb511c131d30", "text": "This paper is about mapping images to continuous output spaces using powerful Bayesian learning techniques. A sparse, semi-supervised Gaussian process regression model (S3GP) is introduced which learns a mapping using only partially labelled training data. We show that sparsity bestows efficiency on the S3GP which requires minimal CPU utilization for real-time operation; the predictions of uncertainty made by the S3GP are more accurate than those of other models leading to considerable performance improvements when combined with a probabilistic filter; and the ability to learn from semi-supervised data simplifies the process of collecting training data. 
The S3GP uses a mixture of different image features: this is also shown to improve the accuracy and consistency of the mapping. A major application of this work is its use as a gaze tracking system in which images of a human eye are mapped to screen coordinates: in this capacity our approach is efficient, accurate and versatile.", "title": "" }, { "docid": "637ca0ccdc858c9e84ffea1bd3531024", "text": "We propose a method to facilitate search through the storyline of TV series episodes. To this end, we use human written, crowdsourced descriptions—plot synopses—of the story conveyed in the video. We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of a TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events.", "title": "" }, { "docid": "b7851d3e08d29d613fd908d930afcd6b", "text": "Word sense embeddings represent a word sense as a low-dimensional numeric vector. While this representation is potentially useful for NLP applications, its interpretability is inherently limited. We propose a simple technique that improves interpretability of sense vectors by mapping them to synsets of a lexical resource. 
Our experiments with AdaGram sense embeddings and BabelNet synsets show that it is possible to retrieve synsets that correspond to automatically learned sense vectors with Precision of 0.87, Recall of 0.42 and AUC of 0.78.", "title": "" }, { "docid": "e9f9a7c506221bacf966808f54c4f056", "text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.", "title": "" }, { "docid": "282480e24a35a922a6498dbf88f34603", "text": "BACKGROUND\nThere is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. 
Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes.\n\n\nMETHODS\nThe DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software.\n\n\nRESULTS\nTo achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items.\n\n\nCONCLUSION\nThe results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F, the query sensitivity ∆, the threshold τ, and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compared with a noisy threshold at line 4, and the result of the comparison is output. Let ⊤ mean that f_k(S) > τ. 
Algorithm 2 is terminated if it outputs ⊤ s times.", "title": "" }, { "docid": "21e536e7197ad878db7938c636d1640b", "text": "Cloud computing has spread rapidly in the fields of computing, research, and industry in the last few years. As part of the service offered, there are new possibilities to build applications and provide various services to the end user by virtualization through the internet. Task scheduling is the most significant issue in cloud computing because the user has to pay for resource usage on a time basis; scheduling distributes the load evenly among the system resources, maximizing utilization and reducing task execution time. Many heuristic algorithms have been proposed to solve the task scheduling problem, such as the Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Cuckoo Search (CS) algorithms. In this paper, a Dynamic Adaptive Particle Swarm Optimization algorithm (DAPSO) has been implemented to enhance the performance of the basic PSO algorithm and to optimize the task runtime by minimizing the makespan of a particular task set while, at the same time, maximizing resource utilization. Also, a task scheduling algorithm has been proposed to schedule independent tasks over the Cloud. The proposed algorithm is an amalgamation of the Dynamic PSO (DAPSO) algorithm and the Cuckoo Search (CS) algorithm, called MDAPSO. According to the experimental results, it is found that the MDAPSO and DAPSO algorithms outperform the original PSO algorithm. Also, a comparative study has been done to evaluate the performance of the proposed MDAPSO with respect to the original PSO.", "title": "" }, { "docid": "ad854ceb89e437ca59099453db33fa41", "text": "Semi-supervised learning has recently emerged as a new paradigm in the machine learning community. It aims at simultaneously exploiting labeled and unlabeled data for classification. 
We introduce here a new semi-supervised algorithm. Its originality is that it relies on a discriminative approach to semi-supervised learning rather than a generative approach, as is usually the case. We present this algorithm in detail for a logistic classifier and show that it can be interpreted as an instance of the Classification Expectation Maximization algorithm. We also provide empirical results on two data sets for sentence classification tasks and analyze the behavior of our methods.", "title": "" } ]
scidocsrr
6ebfa259ce68060dd4a8057689f40df1
Linear Algebraic Structure of Word Senses, with Applications to Polysemy
[ { "docid": "fe99cf42e35cc0b7523247e126f3d8a3", "text": "Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.", "title": "" } ]
[ { "docid": "b87cf41b31b8d163d6e44c9b1fa68cae", "text": "This paper gives a security analysis of Microsoft's ASP.NET technology. The main part of the paper is a list of threats which is structured according to an architecture of Web services and attack points. We also give a reverse table of threats against security requirements as well as a summary of security guidelines for IT developers. This paper has been worked out in collaboration with five University teams, each of which is focussing on a different security problem area. We use the same architecture for Web services and attack points.", "title": "" }, { "docid": "49fed572de904ac3bb9aab9cdc874cc6", "text": "Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vectors. Recently, Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) have been shown to outperform DNN acoustic models in many Automatic Speech Recognition (ASR) tasks. In this work, we investigate the effectiveness of SD transformations for LSTM-RNN acoustic models. Experimental results show that when combined with scaling of LSTM cell states’ outputs, SD transformations achieve 2.3% and 2.1% absolute improvements over the baseline LSTM systems for the AMI IHM and AMI SDM tasks, respectively.", "title": "" }, { "docid": "aeda16415cb3414745493f1c356ffd99", "text": "Recent estimates based on the 1991 census (Schuring 1993) indicate that approximately 45 per cent of the South African population have a speaking knowledge of English (the majority of the population speaking an African language, such as Zulu, Xhosa, Tswana, or Venda, as home language). 
The number of individuals who cite English as a home language appears to be, however, only about 10 per cent of the population. Of this figure it would seem that at least one in three English-speakers come from ethnic groups other than the white one (in proportionally descending order, from the South African Indian, Coloured, and Black ethnic groups). This figure has shown some increase in recent years.", "title": "" }, { "docid": "6a9e30fd08b568ef6607158cab4f82b2", "text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.", "title": "" }, { "docid": "a9ac1250c9be5c7f95086f82251d5157", "text": "In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without using a calibration device, but by enforcing simple constraints on the camera parameters. In the absence of information about the internal camera parameters such as the focal length and the principal point, the knowledge of the camera pixel shape is usually the only available constraint. 
Given a projective reconstruction of a rigid scene, we address the problem of the autocalibration of a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that only requires 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. To this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting in the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.", "title": "" }, { "docid": "bd960da75daf8c268d4def33ada5964d", "text": "(SCADA), have lately gained the attention of IT security researchers as critical components of modern industrial infrastructure. One main reason for this attention is that ICS have not been built with security in mind and are thus particularly vulnerable when they are connected to computer networks and the Internet. ICS consists of SCADA, Programmable Logic Controller (PLC), Human-Machine Interfaces (HMI), sensors, and actuators such as motors. These components are connected to each other over fieldbus or IP-based protocols. In this thesis, we have developed methods and tools for assessing the security of ICSs. By applying the STRIDE threat modeling methodology, we have conducted a high level threat analysis of ICSs. Based on the threat analysis, we created security analysis guidelines for Industrial Control System devices. These guidelines can be applied to many ICS devices and are mostly vendor independent. 
Moreover, we have integrated support for Modbus/TCP in the Scapy packet manipulation library, which can be used for robustness testing of ICS software. In a case study, we applied our security-assessment methodology to a detailed security analysis of a demonstration ICS, consisting of current products. As a result of the analysis, we discovered several security weaknesses. Most of the discovered vulnerabilities were common IT security problems, such as web-application and software-update issues, but some are specific to ICS. For example, we show how the data visualized by the Human-Machine Interface can be altered and modified without limit. Furthermore, sensor data, such as temperature values, can be spoofed within the PLC. Moreover, we show that input validation is critical for security also in the ICS world. Thus, we disclose several security vulnerabilities in production devices. However, in the interest of responsible disclosure of security flaws, the most severe security flaws found are not detailed in the thesis. Our analysis guidelines and the case study provide a basis for conducting vulnerability assessment on further ICS devices and entire systems. In addition, we briefly describe existing solutions for securing ICSs. Acknowledgements I would like to thank Nixu Oy and the colleagues (especially Lauri Vuornos, Juhani Mäkelä and Michael Przybilski) for making it possible to conduct my thesis on Industrial Control Systems. The industrial environment enabled us to take advantage of the research and to apply it to practical projects. Moreover, without the help and involvement of Schneider Electric such an applied analysis would not have been possible. Furthermore, I would like to thank Tuomas …", "title": "" }, { "docid": "554fc3e28147738a9faa80f593ffe9df", "text": "The issue of cyberbullying is a social concern that has arisen due to the prevalent use of computer technology today. 
In this paper, we present a multi-faceted solution to mitigate the effects of cyberbullying, one that uses computer technology in order to combat the problem. We propose to provide assistance for various groups affected by cyberbullying (the bullied and the bully, both). Our solution was developed through a series of group projects and includes i) technology to detect the occurrence of cyberbullying ii) technology to enable reporting of cyberbullying iii) proposals to integrate third-party assistance when cyberbullying is detected iv) facilities for those with authority to manage online social networks or to take actions against detected bullies. In all, we demonstrate how this important social problem which arises due to computer technology can also leverage computer technology in order to take steps to better cope with the undesirable effects that have arisen.", "title": "" }, { "docid": "6ddf62a60b0d56c76b54ca6cd0b28ab9", "text": "Improvement of vehicle safety performance is one of the targets of ITS development. A pre-crash safety system has been developed that utilizes ITS technologies. The Pre-crash Safety system reduces collision injury by estimating TTC(time-tocollision) to preemptively activate safety devices, which consist of “Pre-crash Seatbelt” system and “Pre-crash Brake Assist” system. The key technology of these systems is a “Pre-crash Sensor” to detect obstacles and estimate TTC. In this paper, the Pre-crash Sensor is presented. The Pre-crash Sensor uses millimeter-wave radar to detect preceding vehicles, oncoming vehicles, roadside objects, etc. on the road ahead. Furthermore, by using a phased array system as a vehicle radar for the first time, a compact electronically scanned millimeter-wave radar with high recognition performance has been achieved. 
For obstacle determination, a new crash determination algorithm has been developed that takes into account the estimated direction of advance of the vehicle, in addition to the distance, relative speed and direction of the object.", "title": "" }, { "docid": "13ee1c00203fd12486ee84aa4681dc60", "text": "Mobile crowdsensing has emerged as an efficient sensing paradigm which combines the crowd intelligence and the sensing power of mobile devices, e.g., mobile phones and Internet of Things (IoT) gadgets. This article addresses the contradicting incentives of privacy preservation by crowdsensing users and accuracy maximization and collection of true data by service providers. We first define the individual contributions of crowdsensing users based on the accuracy in data analytics achieved by the service provider from buying their data. We then propose a truthful mechanism for achieving high service accuracy while protecting the privacy based on the user preferences. The users are incentivized to provide true data by being paid based on their individual contribution to the overall service accuracy. Moreover, we propose a coalition strategy which allows users to cooperate in providing their data under one identity, increasing their anonymity privacy protection, and sharing the resulting payoff. Finally, we outline important open research directions in mobile and people-centric crowdsensing.", "title": "" }, { "docid": "bd7a011f47fd48e19e2bbdb2f426ae1d", "text": "In social networks, link prediction, which predicts missing links in current networks and new or dissolving links in future networks, is important for mining and analyzing the evolution of social networks. In the past decade, much work has been done on link prediction in social networks. The goal of this paper is to comprehensively review, analyze and discuss the state of the art of link prediction in social networks.
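Neighborhood-based similarity scores such as common neighbors and Jaccard similarity are among the classic techniques surveys of this kind cover; a minimal illustrative sketch (the toy graph and function names here are mine, not from the survey):

```python
def common_neighbors(adj, u, v):
    """Number of shared neighbors of u and v (a classic link-prediction score)."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Common neighbors normalized by the size of the joint neighborhood."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Toy undirected graph as adjacency sets.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
```

Node pairs with a high score are the candidate future links; real systems would rank all non-adjacent pairs by such a score.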
A systematic categorization of link prediction techniques and problems is presented. Link prediction techniques and problems are then analyzed and discussed. Typical applications of link prediction are also addressed. Achievements and roadmaps of some active research groups are introduced. Finally, some future challenges of link prediction in social networks are discussed.", "title": "" }, { "docid": "1efdb6ff65c1aa8f8ecb95b4d466335f", "text": "This paper provides a linguistic and pragmatic analysis of the phenomenon of irony in order to represent how Twitter’s users exploit irony devices within their communication strategies for generating textual contents. We aim to measure the impact of a wide range of pragmatic phenomena on the interpretation of irony, and to investigate how these phenomena interact with contexts local to the tweet. Informed by linguistic theories, we propose for the first time a multi-layered annotation schema for irony and its application to a corpus of French, English and Italian tweets. We detail each layer, explore their interactions, and discuss our results from both a qualitative and a quantitative perspective.", "title": "" }, { "docid": "b495407cb455186ecad9a45aa88ec509", "text": "This article provides a comprehensive introduction to the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also described, along with an extensive list of open research problems.
This research is sponsored by DARPA’s MARS Program (Contract number N66001-01-C-6018) and the National Science Foundation (CAREER grant number IIS-9876136 and regular grant number IIS-9877033), all of which is gratefully acknowledged. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of the United States Government or any of the sponsoring institutions.", "title": "" }, { "docid": "194db5da505acab27bbe14232b255d09", "text": "Latent Dirichlet allocation defines hidden topics to capture latent semantics in text documents. However, it assumes that all the documents are represented by the same topics, resulting in the “forced topic” problem. To solve this problem, we developed a group latent Dirichlet allocation (GLDA). GLDA uses two kinds of topics: local topics and global topics. The highly related local topics are organized into groups to describe the local semantics, whereas the global topics are shared by all the documents to describe the background semantics. GLDA uses variational inference algorithms for both offline and online data. We evaluated the proposed model for topic modeling and document clustering. Our experimental results indicated that GLDA can achieve a competitive performance when compared with state-of-the-art approaches.", "title": "" }, { "docid": "09b273c9e77f6fc1b2de20f50227c44d", "text": "Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute to improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation.
We have explored the transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and the domain-specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset, prepared for age and gender classification in uncontrolled environments. In addition, the task-specific GilNet CNN model has also been utilized as a baseline method in order to compare with the transferred models. Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task-specific CNN model, which has a limited number of layers and is trained from scratch using a limited amount of data, as in the case of GilNet. The domain-specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks when compared with the generic AlexNet-like model, which shows that transferring from a closer domain is more useful.", "title": "" }, { "docid": "7a9572c3c74f9305ac0d817b04e4399a", "text": "Due to their limited length and freely constructed sentence structures, short texts are difficult to classify. In this paper, a short text classification framework based on Siamese CNNs and few-shot learning is proposed. The Siamese CNNs will learn the discriminative text encoding so as to help classifiers distinguish obscure or informal sentences. The different sentence structures and different descriptions of a topic are viewed as ‘prototypes’, which will be learned by a few-shot learning strategy to improve the classifier’s generalization.
Our experimental results show that the proposed framework achieves better accuracy on Twitter classification and outperforms several popular traditional text classification methods as well as some deep network approaches.", "title": "" }, { "docid": "9721f7f54bfcfcf8c3efb10257002ad9", "text": "Audio description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length movies. In addition we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. We introduce the Large Scale Movie Description Challenge (LSMDC) which contains a parallel corpus of 128,118 sentences aligned to video clips from 200 movies (around 150 h of video in total). The goal of the challenge is to automatically generate descriptions for the movie clips. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in the challenges organized in the context of two workshops at ICCV 2015 and ECCV 2016.", "title": "" }, { "docid": "00b2d45d6810b727ab531f215d2fa73e", "text": "Parental preparation for a child's discharge from the hospital sets the stage for successful transitioning to care and recovery at home.
In this study of 135 parents of hospitalized children, the quality of discharge teaching, particularly the nurses' skills in \"delivery\" of parent teaching, was associated with increased parental readiness for discharge, which was associated with less coping difficulty during the first 3 weeks postdischarge. Parental coping difficulty was predictive of greater utilization of posthospitalization health services. These results validate the role of the skilled nurse as a teacher in promoting positive outcomes at discharge and beyond the hospitalization.", "title": "" }, { "docid": "70f35b19ba583de3b9942d88c94b9148", "text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.", "title": "" }, { "docid": "b27038accdabab12d8e0869aba20a083", "text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. 
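The diffuse-and-collect step described for aggregation GNNs can be sketched in a few lines; this is an illustrative toy example (the graph, signal, and designated node are made up, and the CNN stage that would consume the collected sequence is omitted):

```python
def matvec(A, x):
    """Multiply an adjacency matrix (given as a list of rows) by a signal vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def aggregate(A, x, node, K):
    """Collect the sequence x[node], (Ax)[node], ..., (A^(K-1) x)[node].

    Repeated diffusion through the graph turns the signal into a
    temporally structured stream at the designated node, to which
    regular convolution and pooling stages could then be applied.
    """
    seq, cur = [], x
    for _ in range(K):
        seq.append(cur[node])
        cur = matvec(A, cur)
    return seq

# Toy path graph 0 - 1 - 2 as an adjacency matrix.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

The resulting sequence plays the role of a "time series" observed at the designated node.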
This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.", "title": "" }, { "docid": "7bac448a5754c168c897125a4f080548", "text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. 
The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.", "title": "" } ]
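The Doppler indices reported in studies like the one above follow the standard textbook definitions in terms of peak systolic velocity (PSV), end-diastolic velocity (EDV), and time-averaged mean velocity; a small sketch (the definitions are the standard ones; no values here are taken from the study's data):

```python
def resistive_index(psv, edv):
    """Resistive index: RI = (PSV - EDV) / PSV."""
    return (psv - edv) / psv

def pulsatility_index(psv, edv, mean_v):
    """Pulsatility index: PI = (PSV - EDV) / time-averaged mean velocity."""
    return (psv - edv) / mean_v

def sd_ratio(psv, edv):
    """Systolic-to-diastolic ratio: S/D = PSV / EDV."""
    return psv / edv
```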
scidocsrr
a7fab56e5dbc06d39ff0ec4046a3cb94
Benchmark Machine Learning Approaches with Classical Time Series Approaches on the Blood Glucose Level Prediction Challenge
[ { "docid": "83f970bc22a2ada558aaf8f6a7b5a387", "text": "The imputeTS package specializes in univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview of univariate time series imputation in R. Introduction In almost every domain, from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) and social science (Gottman, 1981), different time series data are measured. While the recorded datasets themselves may be different, one common problem is missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics, this process of replacing missing values is called imputation. Time series imputation is thereby a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on inter-attribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data.
The most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only, they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as an addition to their core package functionality. Most noteworthy are zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages also offer some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview of available time series imputation packages in R, see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview of all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. Overview imputeTS package The imputeTS package can be found on CRAN and is an easy-to-use package that offers several utilities for 'univariate, equi-spaced, numeric time series'. Univariate means there is just one attribute that is observed over time, which leads to a sequence of single observations o_1, o_2, o_3, ..., o_n at successive points t_1, t_2, t_3, ..., t_n in time. Equi-spaced means that time increments between successive data points are equal: |t_1 − t_2| = |t_2 − t_3| = ... = |t_{n−1} − t_n|.
Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview of all available functions and datasets is given.
The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 2
This is followed by more detailed overviews of the three areas covered by the package: 'Plots & Statistics', 'Imputation' and 'Datasets'. Information about how to apply these functions and tools can be found later in the Usage examples section. General overview As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets.

Simple Imputation   Imputation         Plots & Statistics       Datasets
na.locf             na.interpolation   plotNA.distribution      tsAirgap
na.mean             na.kalman          plotNA.distributionBar   tsAirgapComplete
na.random           na.ma              plotNA.gapsize           tsHeating
na.replace          na.seadec          plotNA.imputations       tsHeatingComplete
na.remove           na.seasplit        statsNA                  tsNH4
                                                                tsNH4Complete
Table 1: General Overview imputeTS package

As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms.
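The simple strategies in the first column of Table 1 are easy to mimic outside R as well; a pure-Python sketch of the ideas behind na.locf and na.mean (these helpers are illustrative stand-ins, not the package's actual code; None marks a missing value):

```python
def na_locf(series):
    """Carry the last observed value forward over missing entries (None)."""
    out, last = [], None
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out

def na_mean(series):
    """Replace missing entries with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]
```

Leading missing values stay None under last-observation-carried-forward, which is why the real package also offers a next-observation-carried-backward option.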
Plots & Statistics functions An overview of the available plots and statistics functions can be found in Table 2. To get a good impression of what the plots look like, the section Usage examples is recommended.

Function                 Description
plotNA.distribution      Visualize Distribution of Missing Values
plotNA.distributionBar   Visualize Distribution of Missing Values (Barplot)
plotNA.gapsize           Visualize Distribution of NA gap sizes
plotNA.imputations       Visualize Imputed Values
statsNA                  Print Statistics about the Missing Data
Table 2: Overview Plots & Statistics

The statsNA function calculates several missing data statistics of the input data. This includes the overall percentage of missing values, the absolute amount of missing values, the amount of missing values in different sections of the data, the longest series of consecutive NAs and the occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located. The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designed for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series. Imputation functions An overview of all available imputation algorithms can be found in Table 3.
Even if these functions are really easy to apply, some examples can be found later in the section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b).

Function           Option        Description
na.interpolation   linear        Imputation by Linear Interpolation
                   spline        Imputation by Spline Interpolation
                   stine         Imputation by Stineman Interpolation
na.kalman          StructTS      Imputation by Structural Model & Kalman Smoothing
                   auto.arima    Imputation by ARIMA State Space Representation & Kalman Sm.
na.locf            locf          Imputation by Last Observation Carried Forward
                   nocb          Imputation by Next Observation Carried Backward
na.ma              simple        Missing Value Imputation by Simple Moving Average
                   linear        Missing Value Imputation by Linear Weighted Moving Average
                   exponential   Missing Value Imputation by Exponential Weighted Moving Average
na.mean            mean          Missing Value Imputation by Mean Value
                   median        Missing Value Imputation by Median Value
                   mode          Missing Value Imputation by Mode Value
na.random                        Missing Value Imputation by Random Sample
na.replace                       Replace Missing Values by a Defined Value
na.seadec                        Seasonally Decomposed Missing Value Imputation
na.seasplit                      Seasonally Splitted Missing Value Imputation
na.remove                        Remove Missing Values
Table 3: Overview Imputation Algorithms

For convenience, similar algorithms are available under one function name as a parameter option. For example, linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series.
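The idea behind the na.ma family can likewise be sketched as filling each gap with the average of nearby observed values; a simplified illustration of the 'simple' (unweighted) variant only, with a windowing rule of my own choosing rather than the package's exact one:

```python
def na_ma_simple(series, k=2):
    """Fill each None with the mean of up to k observed neighbors on each side.

    A toy analogue of simple moving-average imputation; the real na.ma also
    supports linear and exponential distance weighting.
    """
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            window = [series[j]
                      for j in range(max(0, i - k), min(len(series), i + k + 1))
                      if series[j] is not None]
            out[i] = sum(window) / len(window) if window else None
    return out
```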
The na.seasplit and na.seadec functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b", "title": "" }, { "docid": "68295a432f68900911ba29e5a6ca5e42", "text": "In many forecasting applications, it is valuable to predict not only the value of a signal at a certain time point in the future, but also the values leading up to that point. This is especially true in clinical applications, where the future state of the patient can be less important than the patient's overall trajectory. This requires multi-step forecasting, a forecasting variant where one aims to predict multiple values in the future simultaneously. Standard methods to accomplish this can propagate error from prediction to prediction, reducing quality over the long term. In light of these challenges, we propose multi-output deep architectures for multi-step forecasting in which we explicitly model the distribution of future values of the signal over a prediction horizon. We apply these techniques to the challenging and clinically relevant task of blood glucose forecasting. Through a series of experiments on a real-world dataset consisting of 550K blood glucose measurements, we demonstrate the effectiveness of our proposed approaches in capturing the underlying signal dynamics. Compared to existing shallow and deep methods, we find that our proposed approaches improve performance individually and capture complementary information, leading to a large improvement over the baseline when combined (4.87 vs. 5.31 absolute percentage error (APE)). Overall, the results suggest the efficacy of our proposed approach in predicting blood glucose level and multi-step forecasting more generally.", "title": "" } ]
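The error propagation that motivates the multi-output architectures in the blood glucose paper above is easiest to see in recursive multi-step forecasting, where each one-step prediction is fed back as the next input; a toy sketch with an arbitrary hand-written AR(1)-style model (the coefficient and horizon are made up):

```python
def recursive_forecast(history, horizon, phi=0.9):
    """Predict `horizon` future values by iterating a one-step AR(1) model.

    Each step reuses the previous prediction, so one-step errors compound
    over the horizon; multi-output (direct) architectures avoid this by
    predicting the whole trajectory at once.
    """
    preds, last = [], history[-1]
    for _ in range(horizon):
        last = phi * last  # one-step model: x_{t+1} = phi * x_t
        preds.append(last)
    return preds
```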
[ { "docid": "fb904fc99acf8228ae7585e29074f96c", "text": "One of the biggest problems in manufacturing is the failure of machine tools due to loss of surface material in cutting operations like drilling and milling. Carrying on the process with a dull tool may damage the workpiece material fabricated. On the other hand, it is unnecessary to change the cutting tool if it is still able to continue the cutting operation. Therefore, an effective diagnosis mechanism is necessary for the automation of machining processes so that production loss and downtime can be avoided. This study concerns the development of a tool wear condition-monitoring technique based on a two-stage fuzzy logic scheme. For this, signals acquired from various sensors were processed to make a decision about the status of the tool. In the first stage of the proposed scheme, statistical parameters derived from thrust force, machine sound (acquired via a very sensitive microphone) and vibration signals were used as inputs to the fuzzy process; and the crisp output values of this process were then taken as the input parameters of the second stage. Finally, the outputs of this stage were fed into a threshold function, the output of which is used to assess the condition of the tool. © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4dcdb2520ec5f9fc9c32f2cbb343808c", "text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.", "title": "" }, { "docid": "356a72153f61311546f6ff874ee79bb4", "text": "In this paper, an object cosegmentation method based on shape conformability is proposed.
Different from previous object cosegmentation methods, which are based on the region feature similarity of the common objects in an image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in the image set. In the proposed method, given an image set where the implied foreground objects may vary in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior for those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with the common shape constraint. To validate our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate shape-based cosegmentation. The experiments on the CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.", "title": "" }, { "docid": "528ef696a9932f87763d66264da515af", "text": "Ethical, philosophical and religious values are central to the continuing controversy over capital punishment. Nevertheless, factual evidence can and should inform policy making. The evidence for capital punishment as a uniquely effective deterrent to murder is especially important, since deterrence is the only major pragmatic argument on the pro-death penalty side. The purpose of this paper is to survey and evaluate the evidence for deterrence.", "title": "" }, { "docid": "43ec6774e1352443f41faf8d3780059b", "text": "Cloud computing is currently one of the most hyped information technology fields and it has become one of the fastest growing segments of IT.
Cloud computing allows us to scale our servers in magnitude and availability in order to provide services to a greater number of end users. Moreover, adopters of the cloud service model are charged based on a pay-per-use basis of the cloud's server and network resources, aka utility computing. With this model, a conventional DDoS attack on server and network resources is transformed in a cloud environment to a new breed of attack that targets the cloud adopter's economic resource, namely Economic Denial of Sustainability attack (EDoS). In this paper, we advocate a novel solution, named EDoS-Shield, to mitigate the Economic Denial of Sustainability (EDoS) attack in the cloud computing systems. We design a discrete simulation experiment to evaluate its performance and the results show that it is a promising solution to mitigate the EDoS.", "title": "" }, { "docid": "1dc4a8f02dfe105220db5daae06c2229", "text": "Photosynthesis begins with light harvesting, where specialized pigment-protein complexes transform sunlight into electronic excitations delivered to reaction centres to initiate charge separation. There is evidence that quantum coherence between electronic excited states plays a role in energy transfer. In this review, we discuss how quantum coherence manifests in photosynthetic light harvesting and its implications. We begin by examining the concept of an exciton, an excited electronic state delocalized over several spatially separated molecules, which is the most widely available signature of quantum coherence in light harvesting. We then discuss recent results concerning the possibility that quantum coherence between electronically excited states of donors and acceptors may give rise to a quantum coherent evolution of excitations, modifying the traditional incoherent picture of energy transfer. Key to this (partially) coherent energy transfer appears to be the structure of the environment, in particular the participation of non-equilibrium vibrational modes. 
We discuss the open questions and controversies regarding quantum coherent energy transfer and how these can be addressed using new experimental techniques.", "title": "" }, { "docid": "8dee3ada764a40fce6b5676287496ccd", "text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.", "title": "" }, { "docid": "1fdb9fdea37c042187407451aef02297", "text": "Websites have gained vital importance for organizations along with the growing competition in the world market. It is known that usability requirements heavily depend on the type, audience and purpose of websites. 
For the e-commerce environment, usability assessment of a website is required to figure out the impact of website design on customer purchases. Thus, usability assessment and design of online pages have become the subject of many scientific studies. However, in none of these studies were design parameters identified in such detail, and they were not classified in line with customer expectations to assess the overall usability of an e-commerce website. This study therefore aims to analyze and classify design parameters according to customer expectations in order to evaluate the usability of e-commerce websites in a more comprehensive manner. Four websites are assessed using the proposed novel approach with respect to the identified design parameters, and the usability scores of the websites are examined. It is revealed that the websites with high usability scores are preferred more by customers. Therefore, it is indicated that usability of e-commerce websites affects customer purchases.", "title": "" }, { "docid": "1af028a0cf88d0ac5c52e84019554d51", "text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction, it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robot's perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment.
Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.", "title": "" }, { "docid": "cc3b5ee3c8c890499f3d52db00520563", "text": "We report results from an oyster hatchery on the Oregon coast, where intake waters experienced variable carbonate chemistry (aragonite saturation state < 0.8 to > 3.2; pH < 7.6 to > 8.2) in the early summer of 2009. Both larval production and midstage growth (~120 to ~150 μm) of the oyster Crassostrea gigas were significantly negatively correlated with the aragonite saturation state of waters in which larval oysters were spawned and reared for the first 48 h of life. The effects of the initial spawning conditions did not have a significant effect on early-stage growth (growth from D-hinge stage to ~120 μm), suggesting a delayed effect of water chemistry on larval development. Rising atmospheric carbon dioxide (CO2) driven by anthropogenic emissions has resulted in the addition of over 140 Pg-C (1 Pg = 10^15 g) to the ocean (Sabine et al. 2011). The thermodynamics of the reactions between carbon dioxide and water require this addition to cause a decline of ocean pH and carbonate ion concentrations ([CO3^2-]). For the observed change between current-day and preindustrial atmospheric CO2, the surface oceans have lost approximately 16% of their [CO3^2-] and decreased in pH by 0.1 unit, although colder surface waters are likely to have experienced a greater effect (Feely et al. 2009). Projections for the open ocean suggest that wide areas, particularly at high latitudes, could reach low enough [CO3^2-] levels that dissolution of biogenic carbonate minerals is thermodynamically favored by the end of the century (Feely et al. 2009; Steinacher et al. 2009), with implications for commercially significant higher trophic levels (Aydin et al. 2005).
There is considerable spatial and temporal variability in ocean carbonate chemistry, and there is evidence that these natural variations affect marine biota, with ecological assemblages next to cold-seep high-CO2 sources having been shown to be distinct from those nearby but less affected by the elevated CO2 levels (Hall-Spencer et al. 2008). Coastal environments that are subject to upwelling events also experience exposure to elevated CO2 conditions where deep water enriched by additions of respiratory CO2 is brought up from depth to the nearshore surface by physical processes. Feely et al. (2008) showed that upwelling on the Pacific coast of central North America markedly increased corrosiveness for calcium carbonate minerals in surface nearshore waters. A small but significant amount of anthropogenic CO2 present in the upwelled source waters provided enough additional CO2 to cause widespread corrosiveness on the continental shelves (Feely et al. 2008). Because the source water for upwelling on the North American Pacific coast takes on the order of decades to transit from the point of subduction to the upwelling locales (Feely et al. 2008), this anthropogenic CO2 was added to the water under a substantially lower-CO2 atmosphere than today’s, and water already en route to this location is likely carrying an increasing burden of anthropogenic CO2. Understanding the effects of natural variations in CO2 in these waters on the local fauna is critical for anticipating how more persistently corrosive conditions will affect marine ecosystems. The responses of organisms to rising CO2 are potentially numerous and include negative effects on respiration, motility, and fertility (Pörtner 2008). From a geochemical perspective, however, the easiest process to understand conceptually is that of solid calcium carbonate (CaCO3(s)) mineral formation.
In nearly all ocean surface waters, formation of CaCO3(s) is thermodynamically favored by the abundance of the reactants, dissolved calcium ([Ca2+]) and carbonate ([CO3^2-]) ions. While oceanic [Ca2+] is relatively constant at high levels that are well described by conservative relationships with salinity, ocean [CO3^2-] decreases as atmospheric CO2 rises, lowering the energetic favorability of CaCO3(s) formation. This energetic favorability is proportional to the saturation state, Ω, defined by", "title": "" }, { "docid": "30bc96451dd979a8c08810415e4a2478", "text": "An adaptive circulator fabricated on a 130 nm CMOS is presented. The circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. Measured isolation between transmitter and receiver for a single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50 dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.", "title": "" }, { "docid": "33dedeabc83271223a1b3fb50bfb1824", "text": "Quantum computers can be used to address electronic-structure problems and problems in materials science and condensed matter physics that can be formulated as interacting fermionic problems, problems which stretch the limits of existing high-performance computers. Finding exact solutions to such problems numerically has a computational cost that scales exponentially with the size of the system, and Monte Carlo methods are unsuitable owing to the fermionic sign problem. These limitations of classical computational methods have made solving even few-atom electronic-structure problems interesting for implementation using medium-sized quantum computers. Yet experimental implementations have so far been restricted to molecules involving only hydrogen and helium.
Here we demonstrate the experimental optimization of Hamiltonian problems with up to six qubits and more than one hundred Pauli terms, determining the ground-state energy for molecules of increasing size, up to BeH2. We achieve this result by using a variational quantum eigenvalue solver (eigensolver) with efficiently prepared trial states that are tailored specifically to the interactions that are available in our quantum processor, combined with a compact encoding of fermionic Hamiltonians and a robust stochastic optimization routine. We demonstrate the flexibility of our approach by applying it to a problem of quantum magnetism, an antiferromagnetic Heisenberg model in an external magnetic field. In all cases, we find agreement between our experiments and numerical simulations using a model of the device with noise. Our results help to elucidate the requirements for scaling the method to larger systems and for bridging the gap between key problems in high-performance computing and their implementation on quantum hardware.", "title": "" }, { "docid": "ba7081afe9e734c5895ccbe7307c8707", "text": "Research effort in ontology visualization has largely focused on developing new visualization techniques. At the same time, researchers have paid less attention to investigating the usability of common visualization techniques that many practitioners regularly use to visualize ontological data. In this paper, we focus on two popular ontology visualization techniques: indented tree and graph. We conduct a controlled usability study with an emphasis on the effectiveness, efficiency, workload and satisfaction of these visualization techniques in the context of assisting users during evaluation of ontology mappings. Findings from this study have revealed both strengths and weaknesses of each visualization technique. 
In particular, while the indented tree visualization is more organized and familiar to novice users, subjects found the graph visualization to be more controllable and intuitive without visual redundancy, particularly for ontologies with multiple inheritance.", "title": "" }, { "docid": "c05fc37d9f33ec94f4c160b3317dda00", "text": "We consider the coordination control for multiagent systems in a very general framework where the position and velocity interactions among agents are modeled by independent graphs. Different algorithms are proposed and analyzed for different settings, including the case without leaders and the case with a virtual leader under fixed position and velocity interaction topologies, as well as the case with a group velocity reference signal under switching velocity interaction. It is finally shown that the proposed algorithms are feasible in achieving the desired coordination behavior provided the interaction topologies satisfy the weakest possible connectivity conditions. Such conditions relate only to the structure of the interactions among agents while being irrelevant to their magnitudes and thus are easy to verify. Rigorous convergence analysis is performed based on a combined use of tools from algebraic graph theory, matrix analysis as well as the Lyapunov stability theory.", "title": "" }, { "docid": "464439e2c9e45045aeee5ca0b88b90e1", "text": "We calculate the average number of critical points of a Gaussian field on a high-dimensional space as a function of their energy and their index.
Our results give a complete picture of the organization of critical points and are of relevance to glassy and disordered systems and landscape scenarios coming from the anthropic approach to string theory.", "title": "" }, { "docid": "1d9361cffd8240f3b691c887def8e2f5", "text": "Twenty-seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC100) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.", "title": "" }, { "docid": "0e644fc1c567356a2e099221a774232c", "text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge.
The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.", "title": "" }, { "docid": "3207a4b3d199db8f43d96f1096e8eb81", "text": "Recently, a branch of machine learning algorithms called deep learning gained huge attention to boost up accuracy of a variety of sensing applications. However, execution of deep learning algorithm such as convolutional neural network on mobile processor is non-trivial due to intensive computational requirements. In this paper, we present our early design of DeepSense - a mobile GPU-based deep convolutional neural network (CNN) framework. For its design, we first explored the differences between server-class and mobile-class GPUs, and studied effectiveness of various optimization strategies such as branch divergence elimination and memory vectorization. Our results show that DeepSense is able to execute a variety of CNN models for image recognition, object detection and face recognition in soft real time with no or marginal accuracy tradeoffs. Experiments also show that our framework is scalable across multiple devices with different GPU architectures (e.g. 
Adreno and Mali).", "title": "" }, { "docid": "7143c97b6ea484566f521e36a3fa834e", "text": "To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional design was used. A general rehabilitation centre and a university rehabilitation centre was the setting for the study. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; and in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43 and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.", "title": "" }, { "docid": "d9b8c9c1427fc68f9e40e24ae517c7e8", "text": "Although studies have shown that Instagram use and young adults' mental health are cross-sectionally associated, longitudinal evidence is lacking. In addition, no study thus far examined this association, or the reverse, among adolescents. 
To address these gaps, we set up a longitudinal panel study among 12- to 19-year-old Flemish adolescents to investigate the reciprocal relationships between different types of Instagram use and depressed mood. Self-report data from 671 adolescent Instagram users (61% girls; MAge = 14.96; SD = 1.29) were used to examine our research question and test our hypotheses. Structural equation modeling showed that Instagram browsing at Time 1 was related to increases in adolescents' depressed mood at Time 2. In addition, adolescents' depressed mood at Time 1 was related to increases in Instagram posting at Time 2. These relationships were similar among boys and girls. Potential explanations for the study findings and suggestions for future research are discussed.", "title": "" } ]
scidocsrr
85113b73b358b234c110373fc41f594e
Impact of Nurse Managers' Leadership Styles on Staff Nurses' Intent to Turnover
[ { "docid": "d31cd5f7dbdbd3dd7e5d8895d359a958", "text": "AIM\nThe aim of this cross-sectional descriptive study was to compare the different leadership styles based on perceptions of nurse managers and their staff.\n\n\nBACKGROUND\nNurse managers' styles are fundamental to improving subordinates' performance and achieving goals at health-care institutions.\n\n\nMETHODS\nThis was a cross-sectional study. A questionnaire developed by Ekvall & Arvonen, considering three leadership domains (Change, Production and Employee relations), was administered to all nurse managers and to their subordinates at a city hospital in north-east Italy.\n\n\nRESULTS\nThe comparison between the leadership styles actually adopted and those preferred by the nurse managers showed that the preferred style always scored higher than the style adopted, the difference reaching statistical significance for Change and Production. The leadership styles preferred by subordinates always scored higher than the styles their nurse managers actually adopted; in the subordinates' opinion, the differences being statistically significant in all three leadership domains.\n\n\nIMPLICATION FOR NURSING MANAGEMENT\nThe study showed that nurse managers' expectations in relation to their leadership differ from those of their subordinates. These findings should be borne in mind when selecting and training nurse managers and other personnel, and they should influence the hospital's strategic management of nurses.", "title": "" } ]
[ { "docid": "ae0cd5f9060fdc4247d4338023022355", "text": "Modeling disease spread and distribution using social media data has become an increasingly popular research area. While Twitter data has recently been investigated for estimating disease spread, the extent to which it is representative of disease spread and distribution in a macro perspective is still an open question. In this paper, we focus on macroscale modeling of influenza-like illnesses (ILI) using a large dataset containing 8,961,932 tweets from Australia collected in 2015. We first propose modifications of the state-of-the-art ILI-related tweet detection approaches to acquire a more refined dataset. We normalize the number of detected ILI-related tweets with Internet access and Twitter penetration rates in each state. Then, we establish a state-level linear regression model between the number of ILI-related tweets and the number of real influenza notifications. The Pearson correlation coefficient of the model is 0.93. Our results indicate that: 1) a strong positive linear correlation exists between the number of ILI-related tweets and the number of recorded influenza notifications at state scale; 2) Twitter data has promising ability in helping detect influenza outbreaks; 3) taking into account the population, Internet access and Twitter penetration rates in each state enhances the prevalence modeling analysis.", "title": "" }, { "docid": "0a4a124589dffca733fa9fa87dc94b35", "text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or farsighted the agent should be. Here we use the near-harmonic γi := 1/i^2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible.
An obvious choice is the space of all probability measures; however, this causes serious problems, as we cannot even describe some of these measures in a finite way.", "title": "" }, { "docid": "93a9fdca133adfd8b6e7b8f030e95622", "text": "Prostate segmentation from Magnetic Resonance (MR) images plays an important role in image guided intervention. However, the lack of clear boundary specifically at the apex and base, and huge variation of shape and texture between the images from different patients make the task very challenging. To overcome these problems, in this paper, we propose a deeply supervised convolutional neural network (CNN) utilizing the convolutional information to accurately segment the prostate from MR images. The proposed model can effectively detect the prostate region with additional deeply supervised layers compared with other approaches. Since some information will be abandoned after convolution, it is necessary to pass the features extracted from early stages to later stages. The experimental results show that significant segmentation accuracy improvement has been achieved by our proposed method compared to other reported approaches.", "title": "" }, { "docid": "d4c8acbbee72b8a9e880e2bce6e2153a", "text": "This paper presents a simple linear operator that accurately estimates the position and parameters of ellipse features. Based on the dual conic model, the operator avoids the intermediate stage of precisely extracting individual edge points by exploiting directly the raw gradient information in the neighborhood of an ellipse's boundary. Moreover, under the dual representation, the dual conic can easily be constrained to a dual ellipse when minimizing the algebraic distance.
The new operator is assessed and compared to other estimation approaches in simulation as well as in real situation experiments and shows better accuracy than the best approaches, including those limited to the center position.", "title": "" }, { "docid": "8ee1abcf16433d333e530f83be29722f", "text": "Since the evolution of the internet, many small and large companies have moved their businesses to the internet to provide services to customers worldwide. Cyber credit-card fraud, or card-not-present fraud, is increasingly rampant in recent years because the credit card is mainly used to request payments by these companies on the internet. Therefore, the need to ensure secured transactions for credit-card owners when consuming their credit cards to make electronic payments for goods and services provided on the internet is a criterion. Data mining has popularly gained recognition in combating cyber credit-card fraud because of its effective artificial intelligence (AI) techniques and algorithms that can be implemented to detect or predict fraud through Knowledge Discovery from unusual patterns derived from gathered data. In this study, a system’s model for cyber credit card fraud detection is discussed and designed. This system implements the supervised anomaly detection algorithm of Data mining to detect fraud in a real time transaction on the internet, and thereby classifying the transaction as legitimate, suspicious fraud and illegitimate transaction. The anomaly detection algorithm is designed on the Neural Networks which implements the working principle of the human brain (as we humans learn from past experience and then make our present-day decisions on what we have learned from our past experience).
To understand how cyber credit card fraud is committed, this study discusses the different types of cyber fraudsters that commit cyber credit card fraud and the techniques these cyber fraudsters use to commit fraud on the internet.", "title": "" }, { "docid": "11747931101b7dd3fed01380396b8fa5", "text": "Unsupervised word translation from nonparallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Our simple linear method is able to achieve better or equal performance to recent state-of-the-art deep adversarial approaches and typically does a little better than the supervised baseline. Our method is also efficient, easy to parallelize and interpretable.", "title": "" }, { "docid": "0d2b961b5546091f05ed7a8eff5f1d7f", "text": "Initial Cryptoasset Offering (ICO), also often called Initial Coin Offering or Initial Token Offering (ITO), is a new means of fundraising through blockchain technology, which allows startups to raise large amounts of funds from the crowd at an unprecedented speed. However, it is not easy for ordinary investors to distinguish genuine fundraising activities through ICOs from scams. Different websites that gather and evaluate ICOs at different stages have emerged as a solution to this issue.
What remains unclear is how these websites are evaluating ICOs, and consequently how reliable and credible their evaluations are. In this paper we present the first findings of an analysis of a set of 28 ICO evaluation websites, aiming at revealing the state of the practice in terms of ICO evaluation. Key information about ICOs collected by these websites are categorised, and key factors that differentiate the evaluation mechanisms employed by these evaluation websites are identified. The findings of our study could help a better understanding of what entails to properly evaluate ICOs. It is also a first step towards discovering the key success factors of ICOs.", "title": "" }, { "docid": "dddc0b6196a81de7c24c8bfc9dc0af7e", "text": "Microblog, such as Weibo and Twitter, has become an important platform where people share their opinions. Much research has been done to detect topics and events in microblogs. Due to the dynamic nature of events, it is more crucial to monitor the evolution and trace the development of the events. People pay more attention to the whole evolution chain of the events rather than a single event. In this paper, we propose a method to automatically discover event evolution chain in microblogs based on multiple similarity measures including contents, locations and participants. We build a 5-tuple event description model specifically for events detected from microblogs and analyze their relationships. Inverted index and locality-sensitive hashing are used to improve the efficiency of the algorithm. Experiment shows that our method gain a 143.33% speed up against method without locality-sensitive hashing. 
In comparison with the ground truth and a baseline method, the results illustrate that it effectively covers the ground truth and outperforms the baseline method, especially in dealing with long-term spanning events.", "title": "" },
Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1-N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset – WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly.", "title": "" },
We achieve 53.3 F1 on WEBQUESTIONS, a substantial improvement over the state-of-the-art.", "title": "" },
Our work can guide application developers, system administrators and researchers to better design and deploy big data applications on their platforms to improve the overall performance.", "title": "" }, { "docid": "7709fa95a26a1d8a45250cf850c92755", "text": "Metric learning aims to learn a distance function to measure the similarity of samples, which plays an important role in many visual understanding applications. Generally, the optimal similarity functions for different visual understanding tasks are task specific because the distributions for data used in different tasks are usually different. It is generally believed that learning a metric from training data can obtain more encouraging performances than handcrafted metrics [1]-[3], e.g., the Euclidean and cosine distances. A variety of metric learning methods have been proposed in the literature [2]-[5], and many of them have been successfully employed in visual understanding tasks such as face recognition [6], [7], image classification [2], [3], visual search [8], [9], visual tracking [10], [11], person reidentification [12], cross-modal matching [13], image set classification [14], and image-based geolocalization [15]-[17].", "title": "" }, { "docid": "671952f18fb9041e7335f205666bf1f5", "text": "This new handbook is an efficient way to keep up with the continuing advances in antenna technology and applications. The handbook is uniformly well written, up-to-date, and filled with a wealth of practical information. This makes it a useful reference for most antenna engineers and graduate students.", "title": "" }, { "docid": "012434b92f2d3f83b7f9397f990a96b0", "text": "Error estimation accuracy is the salient issue regarding the validity of a classifier model. When samples are small, training-data-based error estimates tend to suffer from inaccuracy and quantification of error estimation accuracy is difficult. 
Numerous methods have been proposed for estimating confidence intervals for the true error based on the estimated error. This paper surveys proposed methods and quantifies their performance. Monte Carlo methods are used to obtain accurate estimates of the true confidence intervals and compare these to the intervals estimated from samples. We consider different error estimators and several proposed confidence-bound estimators. Both synthetic and real genomic data are employed. Our simulations show that the majority of the confidence interval methods perform poorly because of the difference in shape between the true and estimated intervals. According to our results, the best estimation strategy is to use 10-times repeated 10-fold cross-validation with a confidence interval based on the standard deviation. © 2012 Elsevier Ltd. All rights reserved.", "title": "" },
These major causes lead to serious problems such as unhygienic conditions, air pollution, and an unhealthy environment that creates disease. Until now, research has been carried out on software applications for indicating dustbin status, and on shortest-path methods for garbage-collecting vehicles that integrate RFID, GSM, and GIS systems; but no active effort has been made towards managing such waste in an automated way. Considering all these major factors, a smart solid waste management system is designed that will check the status of dustbins, give alerts on dustbin fullness and, more significantly, has features to educate people to use dustbins properly and to automatically sense and clean garbage present outside the dustbin. Thus the presented solution achieves smart solid waste management, satisfying the goal of making Indian cities clean, healthy and hygienic.", "title": "" },
Taking advantage of these characteristics, the robotic roots are able to move inside a medium by growing their body. In this article, we describe the design of the growing robot together with the modeling of the deposition process and the description of the implemented growing movement strategy. Experiments were performed in air and in an artificial medium to verify the functionalities and to evaluate the robot performance. The results showed that the robotic root, with a diameter of 50 mm, grows with a speed of up to 4 mm/min, overcoming medium pressure of up to 37 kPa (i.e., it is able to lift up to 6 kg) and bending with a minimum radius of 100 mm.", "title": "" } ]
scidocsrr
a97966858719eff8599ad5fbb8b7286a
LineNet: a Zoomable CNN for Crowdsourced High Definition Maps Modeling in Urban Environments
[ { "docid": "830f36268b9220d378d9aafaf52f5144", "text": "Deep Convolutional Neural Networks (DCNNs) achieve invariance to domain transformations (deformations) by using multiple 'max-pooling' (MP) layers. In this work we show that alternative methods of modeling deformations can improve the accuracy and efficiency of DCNNs. First, we introduce epitomic convolution as an alternative to the common convolution-MP cascade of DCNNs, which comes with the same computational cost but favorable learning properties. Second, we introduce a Multiple Instance Learning algorithm to accommodate global translation and scaling in image classification, yielding an efficient algorithm that trains and tests a DCNN in a consistent manner. Third, we develop a DCNN sliding window detector that explicitly, but efficiently, searches over the object's position, scale, and aspect ratio. We provide competitive image classification and localization results on the ImageNet dataset and object detection results on Pascal VOC2007.", "title": "" }, { "docid": "d01fe3897f0f09fc023d943ece518e6e", "text": "In this paper, we propose an efficient lane detection algorithm for lane departure detection; this algorithm is suitable for low computing power systems like automobile black boxes. First, we extract candidate points, which are support points, to extract a hypothesis as two lines. In this step, Haar-like features are used, and this enables us to use an integral image to remove computational redundancy. Second, our algorithm verifies the hypothesis using defined rules. These rules are based on the assumption that the camera is installed at the center of the vehicle. Finally, if a lane is detected, then a lane departure detection step is performed.
As a result, our algorithm achieved a 90.16% detection rate; the processing time is approximately 0.12 milliseconds per frame without any parallel computing.", "title": "" },
Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.", "title": "" } ]
[ { "docid": "f5d9d701bcc3b629dc90db57448c443c", "text": "IoT is a driving force for the next generation of cyber-physical manufacturing systems. The construction and operation of these systems is a big challenge. In this paper, a framework that exploits model driven engineering to address the increasing complexity in this kind of systems is presented. The framework utilizes the model driven engineering paradigm to define a domain specific development environment that allows the control engineer, a) to transform the mechanical units of the plant to Industrial Automation Things (IAT), i.e., to IoT-compliant manufacturing cyber-physical components, and, b) to specify the cyber components, which implement the plant processes, as physical mashups, i.e., compositions of plant services provided by IATs. The UML4IoT profile is extended to address the requirements of the framework. The approach was successfully applied on a laboratory case study to demonstrate its effectiveness in terms of flexibility and responsiveness.", "title": "" }, { "docid": "5e9e62b69b0e98e81f5eec77bbcc0f73", "text": "The Conners' Parent Rating Scale (CPRS) is a popular research and clinical tool for obtaining parental reports of childhood behavior problems. The present study introduces a revised CPRS (CPRS-R) which has norms derived from a large, representative sample of North American children, uses confirmatory factor analysis to develop a definitive factor structure, and has an updated item content to reflect recent knowledge and developments concerning childhood behavior problems. Exploratory and confirmatory factor-analytic results revealed a seven-factor model including the following factors: Cognitive Problems, Oppositional, Hyperactivity-Impulsivity, Anxious-Shy, Perfectionism, Social Problems, and Psychosomatic. 
The psychometric properties of the revised scale appear adequate as demonstrated by good internal reliability coefficients, high test-retest reliability, and effective discriminatory power. Advantages of the CPRS-R include a corresponding factor structure with the Conners' Teacher Rating Scale-Revised and comprehensive symptom coverage for attention deficit hyperactivity disorder (ADHD) and related disorders. Factor congruence with the original CPRS as well as similarities with other parent rating scales are discussed.", "title": "" }, { "docid": "a28c91e46099d49f45360501969d6514", "text": "Mobile forensics is an exciting new field of research. An increasing number of Open source and commercial digital forensics tools are focusing on less time during digital forensic examination. There is a major issue affecting some mobile forensic tools that allow the tools to spend much time during the forensic examination. It is caused by implementation of poor file searching algorithms by some forensic tool developers. This research is focusing on reducing the time taken to search for a file by proposing a novel, multi-pattern signature matching algorithm called M-Aho-Corasick which is adapted from the original Aho-Corasick algorithm. Experiments are conducted on five different datasets which one of the data sets is obtained from Digital Forensic Research Workshop (DFRWS 2010). Comparisons are made between M-Aho-Corasick using M_Triage with Dec0de, Lifter, XRY, and Xaver. The result shows that M-Aho-Corasick using M_Triage has reduced the searching time by 75% as compared to Dec0de, 36% as compared to Lifter, 28% as compared to XRY, and 71% as compared to Xaver. Thus, M-Aho-Corasick using M_Triage tool is more efficient than Dec0de, Lifter, XRY, and Xaver in avoiding the extraction of high number of false positive results. 
Keywords—mobile forensics; Images; Videos; M-Aho-Corasick; (File Signature Pattern Matching)", "title": "" },
This study demonstrates how a transaction model can be dynamically created and updated, and fraud can be automatically detected for prepaid cards. A card processing company creates models of the store terminals rather than the customers, in part, because of the anonymous nature of prepaid cards. The technique automatically creates, updates, and compares hidden Markov models (HMM) of merchant terminals. We present fraud detection and experiments on real transactional data, showing the efficiency and effectiveness of the approach. In the fraud test cases, derived from known fraud cases, the technique has a good F-score. The technique can detect fraud in real-time for merchants, as card transactions are processed by a modern transaction processing system. © 2017 Published by Elsevier Ltd.", "title": "" }, { "docid": "c406d734f32cc4b88648c037d9d10e46", "text": "In this paper, we review the state-of-the-art technologies for driver inattention monitoring, which can be classified into the following two main categories: 1) distraction and 2) fatigue. Driver inattention is a major factor in most traffic accidents. Research and development has actively been carried out for decades, with the goal of precisely determining the drivers' state of mind. In this paper, we summarize these approaches by dividing them into the following five different types of measures: 1) subjective report measures; 2) driver biological measures; 3) driver physical measures; 4) driving performance measures; and 5) hybrid measures. Among these approaches, subjective report measures and driver biological measures are not suitable under real driving conditions but could serve as some rough ground-truth indicators. The hybrid measures are believed to give more reliable solutions compared with single driver physical measures or driving performance measures, because the hybrid measures minimize the number of false alarms and maintain a high recognition rate, which promote the acceptance of the system. 
We also discuss some nonlinear modeling techniques commonly used in the literature.", "title": "" }, { "docid": "c65050bb98a071fa8b60fa262536a476", "text": "Proliferative periostitis is a pathologic lesion that displays an osteo-productive and proliferative inflammatory response of the periosteum to infection or other irritation. This lesion is a form of chronic osteomyelitis that is often asymptomatic, occurring primarily in children, and found only in the mandible. The lesion can be odontogenic or non-odontogenic in nature. A 12 year-old boy presented with an unusual odontogenic proliferative periostitis that originated from the lower left first molar, however, the radiographic radiolucent area and proliferative response were discovered at the apices of the lower left second molar. The periostitis was treated by single-visit non-surgical endodontic treatment of lower left first molar without antibiotic therapy. The patient has been recalled regularly; the lesion had significantly reduced in size 3-months postoperatively. Extraoral symmetry occurred at approximately one year recall. At the last visit, 2 years after initial treatment, no problems or signs of complications have occurred; the radiographic examination revealed complete resolution of the apical lesion and apical closure of the lower left second molar. Odontogenic proliferative periostitis can be observed at the adjacent normal tooth. Besides, this case demonstrates that non-surgical endodontics is a viable treatment option for management of odontogenic proliferative periostitis.", "title": "" }, { "docid": "ba39f3a2b5ed9af6cdf4530176039e05", "text": "Survival analysis can be applied to build models fo r time to default on debt. In this paper we report an application of survival analysis to model default o n a large data set of credit card accounts. We exp lore the hypothesis that probability of default is affec ted by general conditions in the economy over time. 
These macroeconomic variables cannot readily be inc luded in logistic regression models. However, survival analysis provides a framework for their in clusion as time-varying covariates. Various macroeconomic variables, such as interest rate and unemployment rate, are included in the analysis. We show that inclusion of these indicators improves model fit and affects probability of default yielding a modest improvement in predictions of def ault on an independent test set.", "title": "" }, { "docid": "c4e80fd8e2c5b1795c016c9542f8f33e", "text": "Duckweeds, plants of the Lemnaceae family, have the distinction of being the smallest angiosperms in the world with the fastest doubling time. Together with its naturally ability to thrive on abundant anthropogenic wastewater, these plants hold tremendous potential to helping solve critical water, climate and fuel issues facing our planet this century. With the conviction that rapid deployment and optimization of the duckweed platform for biomass production will depend on close integration between basic and applied research of these aquatic plants, the first International Conference on Duckweed Research and Applications (ICDRA) was organized and took place in Chengdu, China, from October 7th to 10th of 2011. Co-organized with Rutgers University of New Jersey (USA), this Conference attracted participants from Germany, Denmark, Japan, Australia, in addition to those from the US and China. The following are concise summaries of the various oral presentations and final discussions over the 2.5 day conference that serve to highlight current research interests and applied research that are paving the way for the imminent deployment of this novel aquatic crop. 
We believe the sharing of this information with the broad Plant Biology community is an important step toward the renaissance of this excellent plant model that will have important impact on our quest for sustainable development of the world.", "title": "" }, { "docid": "519b0dbeb1193a14a06ba212790f49d4", "text": "In recent years, sign language recognition has attracted much attention in computer vision . A sign language is a means of conveying the message by using hand, arm, body, and face to convey thoughts and meanings. Like spoken languages, sign languages emerge and evolve naturally within hearing-impaired communities. However, sign languages are not universal. There is no internationally recognized and standardized sign language for all deaf people. As is the case in spoken language, every country has got its own sign language with high degree of grammatical variations. The sign language used in India is commonly known as Indian Sign Language (henceforth called ISL).", "title": "" }, { "docid": "88486271f9e455bdba5d02c99dcc19c3", "text": "TextCNN, the convolutional neural network for text, is a useful deep learning algorithm for sentence classification tasks such as sentiment analysis and question classification[2]. However, neural networks have long been known as black boxes because interpreting them is a challenging task. Researchers have developed several tools to understand a CNN for image classification by deep visualization[6], but research about deep TextCNNs is still insufficient. In this paper, we are trying to understand what a TextCNN learns on two classical NLP datasets. 
Our work focuses on functions of different convolutional kernels and correlations between convolutional kernels.", "title": "" }, { "docid": "c24550119d4251d6d7ce1219b8aa0ee4", "text": "This article considers the delivery of efficient and effective dental services for patients whose disability and/or medical condition may not be obvious and which consequently can present a hidden challenge in the dental setting. Knowing that the patient has a particular condition, what its features are and how it impacts on dental treatment and oral health, and modifying treatment accordingly can minimise the risk of complications. The taking of a careful medical history that asks the right questions in a manner that encourages disclosure is key to highlighting hidden hazards and this article offers guidance for treating those patients who have epilepsy, latex sensitivity, acquired or inherited bleeding disorders and patients taking oral or intravenous bisphosphonates.", "title": "" }, { "docid": "207d3e95d3f04cafa417478ed9133fcc", "text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. 
In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess land use change by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that the built-up area has increased from 28 to 255 km by more than 30% and agricultural land has been reduced by 33%. Future prediction is done by using Markov chain analysis. Information from this study of urban growth and land use and land cover change is very useful to local government and urban planners for the betterment of future plans for sustainable development of the city. © 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.", "title": "" },
These advantages are confirmed experimentally by an NN-HMM hybrid that we developed, based on context-independent phoneme models, that achieved 90.5% word accuracy on the Resource Management database, in contrast to only 86.0% accuracy achieved by a pure HMM under similar conditions. In the course of developing this system, we explored two different ways to use neural networks for acoustic modeling: prediction and classification. We found that predictive networks yield poor results because of a lack of discrimination, but classification networks gave excellent results. We verified that, in accordance with theory, the output activations of a classification network form highly accurate estimates of the posterior probabilities P(class|input), and we showed how these can easily be converted to likelihoods P(input|class) for standard HMM recognition algorithms. Finally, this thesis reports how we optimized the accuracy of our system with many natural techniques, such as expanding the input window size, normalizing the inputs, increasing the number of hidden units, converting the network’s output activations to log likelihoods, optimizing the learning rate schedule by automatic search, backpropagating error from word level outputs, and using gender dependent networks.", "title": "" }, { "docid": "22d7464aaf0ad46e3bd04a30312ee659", "text": "Cities are drivers of economic development, providing infrastructure to support countless activities and services. Today, the world’s 750 biggest cities account for more than 57% of the global GDP and this number is expected to increase to 61% by 2030. More than half of the world’s population lives in cities, or urban areas, and this share will continue to grow. Rapid urban growth has posed both challenges and opportunities for city planners, not least when it comes to the design of transportation and logistic systems for freight. 
But urbanization also fosters innovation and sharing, which have led to new models for organizing movement of goods within the city. In this chapter, we highlight one of these new models: Crowd Logistics. We define the characterizing features of crowd logistics, review applications of crowd-based services within urban environments, and discuss research opportunities in the area of crowd logistics.", "title": "" }, { "docid": "03ce79214eb7e7f269464574b1e5c208", "text": "Variable draft is shown to be an essential feature for a research and survey SWATH ship large enough for unrestricted service worldwide. An oceangoing semisubmerged (variable draft) SWATH can be designed for access to shallow harbors. Speed at transit (shallow) draft can be comparable to monohulls of the same power while assuring equal or better seakeeping characteristics. Seakeeping with the ship at deeper drafts can be superior to an equivalent SWATH that is designed for all operations at a single draft. The lower hulls of the semisubmerged SWATH ship can be devoid of fins. A practical target for interior clear spacing between the lower hulls is about 50 feet. Access to the sea surface for equipment can be provided astern, over the side, or from within a centerwell amidships. One of the lower hulls can be optimized to carry acoustic sounding equipment. A design is presented in this paper for a semisubmerged ship with a trial speed in excess of 15 knots, a scientific mission payload of 300 tons, and accommodations for 50 personnel. 1. SEMISUBMERGED SWATH TECHNOLOGY A single draft for the full range of operating conditions is a common feature of typical SWATH ship designs. This constant draft characteristic is found in the SWATH ships built by Mitsui, most notably the KAIYO, and the SWATH T-AGOS which is now under construction for the U.S. Navy. The constant draft design for ships of this size (about 3,500 tons displacement) poses two significant drawbacks. 
One is that the draft must be at least 25 feet to satisfy seakeeping requirements. This draft is restrictive for access to many harbors that would be useful for research and survey functions. The second is that hull and column (strut) hydrodynamics generally result in the SWATH being a larger ship and having greater power requirements than for an equivalent monohull. The ship size and hull configuration, together with the necessity for stabilizing fins, usually leads to a higher capital cost than for a rougher riding, but otherwise equivalent, monohull. The distinguishing feature of the semisubmerged SWATH ship is variable draft. Sufficient allowance for ballast transfer is made to enable the ship to vary its draft under all load conditions. The shallowest draft is well within usual harbor limits and gives the lower hulls a slight freeboard. It also permits transit in low to moderate sea conditions using less propulsion power than is needed by a constant draft SWATH. The semisubmerged SWATH gives more design flexibility to provide for deep draft conditions that strike a balance between operating requirements and seakeeping characteristics. Intermediate “storm” drafts can be selected that are a compromise between seakeeping, speed, and upper hull clearance to avoid slamming. A discussion of these and other tradeoffs in semisubmerged SWATH ship design for oceanographic applications is given in a paper by Gaul and McClure. A more general discussion of design tradeoffs is given in a later paper. The semisubmerged SWATH technology gives rise to some notable contrasts with constant draft SWATH ships. For any propulsion power applied, the semisubmerged SWATH has a range of speed that depends on draft. Highest speeds are obtained at minimum (transit) draft. 
Because the lower hull freeboard is small at transit draft, seakeeping at service speed can be made equal to or better than an equivalent monohull. The ship is designed for maximum speed at transit draft so the lower hull form is more akin to a surface craft than a submarine. This allows use of a nearly rectangular cross section for the lower hulls which provides damping of vertical motion. For moderate speeds at deeper drafts with the highly damped lower hull form, the ship need not be equipped with stabilizing fins. Since maximum speed is achieved with the columns (struts) out of the water, it is practical to use two columns, rather than one, on each lower hull. The four column configuration at deep drafts minimizes the variation of ship motion response with change in course relative to surface wave direction. The width of the ship and lack of appendages on the lower hulls increases the utility of a large underside deck opening (moonpool) amidship. The basic Semisubmerged SWATH Research and Survey Ship design has evolved from requirements first stated by the Institute for Geophysics of the University of Texas (UTIG) in 1984. Blue Sea McClure provided the only SWATH configuration in a set of five conceptual designs procured competitively by the University. Woods Hole Oceanographic Institution, on behalf of the University-National Oceanographic Laboratory System, subsequently contracted for a revision of the UTIG design to meet requirements for an oceanographic research ship. The design was further refined to meet requirements posed by the U.S. Navy for an oceanographic research ship. 
The intent of this paper is to use this generic design to illustrate the main features of semisubmerged SWATH ships.", "title": "" }, { "docid": "c1a6b9df700226212dca8857e7001896", "text": "Knowing the location of a social media user and their posts is important for various purposes, such as the recommendation of location-based items/services, and locality detection of crisis/disasters. This paper describes our submission to the shared task “Geolocation Prediction in Twitter” of the 2nd Workshop on Noisy User-generated Text. In this shared task, we propose an algorithm to predict the location of Twitter users and tweets using a multinomial Naive Bayes classifier trained on Location Indicative Words and various textual features (such as city/country names, #hashtags and @mentions). We compared our approach against various baselines based on Location Indicative Words, city/country names, #hashtags and @mentions as individual feature sets, and experimental results show that our approach outperforms these baselines in terms of classification accuracy, mean and median error distance.", "title": "" }, { "docid": "3dbedb4539ac6438e9befbad366d1220", "text": "The main focus of this paper is to propose integration of dynamic and multiobjective algorithms for graph clustering in dynamic environments under multiple objectives. The primary application is to multiobjective clustering in social networks which change over time. Social networks, typically represented by graphs, contain information about the relations (or interactions) among online materials (or people). A typical social network tends to expand over time, with newly added nodes and edges being incorporated into the existing graph. We reflect these characteristics of social networks based on real-world data, and propose a suitable dynamic multiobjective evolutionary algorithm. Several variants of the algorithm are proposed and compared. 
Since social networks change continuously, the immigrant schemes effectively used in previous dynamic optimisation give useful ideas for new algorithms. An adaptive integration of multiobjective evolutionary algorithms outperformed other algorithms in dynamic social networks.", "title": "" }, { "docid": "653b44b98c78bed426c0e5630145c2ba", "text": "In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see that such a construction can be characterised by means of a simple method in the context of propositional logic. We then propose an application of our approach to rational closure in the field of Description Logics, an important knowledge representation formalism, and provide a simple decision procedure for this case.", "title": "" }, { "docid": "ab68f5a8b6a48423c8d8d01758cbd47d", "text": "Typical recommender systems use the root mean squared error (RMSE) between the predicted and actual ratings as the evaluation metric. We argue that RMSE is not an optimal choice for this task, especially when we will only recommend a few (top) items to any user. Instead, we propose using a ranking metric, namely normalized discounted cumulative gain (NDCG), as a better evaluation metric for this task. Borrowing ideas from the learning to rank community for web search, we propose novel models which approximately optimize NDCG for the recommendation task. Our models are essentially variations on matrix factorization models where we also additionally learn the features associated with the users and the items for the ranking task. Experimental results on a number of standard collaborative filtering data sets validate our claims. The results also show the accuracy and efficiency of our models and the benefits of learning features for ranking.", "title": "" } ]
scidocsrr
a61863ed5eb35a663276a1a23e705585
A Field-Based Representation of Surrounding Vehicle Motion from a Monocular Camera
[ { "docid": "3fa8b8a93716a85f8573bd1cb8d215f2", "text": "Vision-based research for intelligent vehicles has traditionally focused on specific regions around a vehicle, such as a front-looking camera for, e.g., lane estimation. Traffic scenes are complex and vital information could be lost in unobserved regions. This paper proposes a framework that uses four visual sensors for a full surround view of a vehicle in order to achieve an understanding of surrounding vehicle behaviors. The framework will assist the analysis of naturalistic driving studies by automating the task of data reduction of the observed trajectories. To this end, trajectories are estimated using a vehicle detector together with a multiperspective optimized tracker in each view. The trajectories are transformed to a common ground plane, where they are associated between perspectives and analyzed to reveal tendencies around the ego-vehicle. The system is tested on sequences from 2.5 h of driving on US highways. The multiperspective tracker is tested in each view as well as for the ability to associate vehicles between views with a 92% recall score. A case study of vehicles approaching from the rear shows certain patterns in behavior that could potentially influence the ego-vehicle.", "title": "" }, { "docid": "fc2c995d20c83a72ea46f5055d1847a1", "text": "In this paper, we present a novel probabilistic compact representation of the on-road environment, i.e., the dynamic probabilistic drivability map (DPDM), and demonstrate its utility for predictive lane change and merge (LCM) driver assistance during highway and urban driving. The DPDM is a flexible representation and readily accepts data from a variety of sensor modalities to represent the on-road environment as a spatially coded data structure, encapsulating spatial, dynamic, and legal information. Using the DPDM, we develop a general predictive system for LCMs. 
We formulate the LCM assistance system to solve for the minimum-cost solution to merge or change lanes, which is solved efficiently using dynamic programming over the DPDM. Based on the DPDM, the LCM system recommends the required acceleration and timing to safely merge or change lanes with minimum cost. System performance has been extensively validated using real-world on-road data, including urban driving, on-ramp merges, and both dense and free-flow highway conditions.", "title": "" } ]
[ { "docid": "cdc77cc0dfb4dc9c91e20c3118b1d1ee", "text": "Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a novel speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself. This factoring of the model leads to fewer nonzero indicator functions, and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also results in typically slightly lower perplexities. The same trick can be used to speed training of other machine learning techniques, e.g. neural networks, applied to any problem with a large number of outputs, such as language modeling.", "title": "" }, { "docid": "7ebff2391401cef25b27d510675e9acd", "text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. 
We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.", "title": "" }, { "docid": "94f23b8710342512c84da0c7ab9492d8", "text": "Transferring knowledge across a sequence of related tasks is an important challenge in reinforcement learning. Despite much encouraging empirical evidence that shows benefits of transfer, there has been very little theoretical analysis. In this paper, we study a class of lifelong reinforcement-learning problems: the agent solves a sequence of tasks modeled as finite Markov decision processes (MDPs), each of which is from a finite set of MDPs with the same state/action spaces and different transition/reward functions. 
Inspired by the need for cross-task exploration in lifelong learning, we formulate a novel online discovery problem and give an optimal learning algorithm to solve it. Such results allow us to develop a new lifelong reinforcement-learning algorithm, whose overall sample complexity in a sequence of tasks is much smaller than that of single-task learning, with high probability, even if the sequence of tasks is generated by an adversary. Benefits of the algorithm are demonstrated in a simulated problem.", "title": "" }, { "docid": "4661b378eda6cd44c95c40ebf06b066b", "text": "Speech signal degradation in real environments mainly results from room reverberation and concurrent noise. While human listening is robust in complex auditory scenes, current speech segregation algorithms do not perform well in noisy and reverberant environments. We treat the binaural segregation problem as binary classification, and employ deep neural networks (DNNs) for the classification task. The binaural features of the interaural time difference and interaural level difference are used as the main auditory features for classification. The monaural feature of gammatone frequency cepstral coefficients is also used to improve classification performance, especially when interference and target speech are collocated or very close to one another. We systematically examine DNN generalization to untrained spatial configurations. Evaluations and comparisons show that DNN-based binaural classification produces superior segregation performance in a variety of multisource and reverberant conditions.", "title": "" }, { "docid": "a00201271997f398ec8e5eb4160fbe2e", "text": "We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond the full-detection approaches in terms of time performance optimization. 
A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.", "title": "" }, { "docid": "6620aa5b1ecaac765112f0f1f15ef920", "text": "In this paper we present the tangible 3D tabletop and discuss the design potential of this novel interface. The tangible 3D tabletop combines tangible tabletop interaction with 3D projection in such a way that the tangible objects may be augmented with visual material corresponding to their physical shapes, positions, and orientation on the tabletop. In practice, this means that both the tabletop and the tangibles can serve as displays. We present the basic design principles for this interface, particularly concerning the interplay between 2D on the tabletop and 3D for the tangibles, and present examples of how this kind of interface might be used in the domain of maps and geolocalized data. We then discuss three central design considerations concerning 1) the combination and connection of content and functions of the tangibles and tabletop surface, 2) the use of tangibles as dynamic displays and input devices, and 3) the visual effects facilitated by the combination of the 2D tabletop surface and the 3D tangibles.", "title": "" }, { "docid": "52d31aa77302bbf50fa193759f37d393", "text": "Nonnegative matrix factorization (NMF) has been widely used for discovering physically meaningful latent components in audio signals to facilitate source separation. Most of the existing NMF algorithms require that the number of latent components is provided a priori, which is not always possible. 
In this paper, we leverage developments from the Bayesian nonparametrics and compressive sensing literature to propose a probabilistic Beta Process Sparse NMF (BP-NMF) model, which can automatically infer the proper number of latent components based on the data. Unlike previous models, BP-NMF explicitly assumes that these latent components are often completely silent. We derive a novel mean-field variational inference algorithm for this nonconjugate model and evaluate it on both synthetic data and real recordings on various tasks.", "title": "" }, { "docid": "19cb14825c6654101af1101089b66e16", "text": "Critical infrastructures, such as power grids and transportation systems, are increasingly using open networks for operation. The use of open networks poses many challenges for control systems. The classical design of control systems takes into account modeling uncertainties as well as physical disturbances, providing a multitude of control design methods such as robust control, adaptive control, and stochastic control. With the growing level of integration of control systems with new information technologies, modern control systems face uncertainties not only from the physical world but also from the cybercomponents of the system. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. Exploitation of these vulnerabilities can lead to severe damage as has been reported in various news outlets [1], [2]. 
More recently, it has been reported in [3] and [4] that a computer worm, Stuxnet, was spread to target Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes.", "title": "" }, { "docid": "c77a3fcd6c689a58a8eebfef9a89af70", "text": "Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines. We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC. We further establish guidelines for trustable results in neural GEC and propose a set of model-independent methods for neural GEC that can be easily applied in most GEC settings. Proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training-objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models. The combined effects of these methods result in better than state-of-the-art neural GEC models that outperform previously best neural GEC systems by more than 10% M2 on the CoNLL-2014 benchmark and 5.9% on the JFLEG test set. Non-neural state-of-the-art systems are outperformed by more than 2% on the CoNLL-2014 benchmark and by 4% on JFLEG.", "title": "" }, { "docid": "e04bc357c145c38ed555b3c1fa85c7da", "text": "This paper presents a Hybrid (RSA & AES) encryption algorithm to safeguard data security in the Cloud. Security, being the most important factor in cloud computing, has to be dealt with great precautions. This paper mainly focuses on the following key tasks: 1. Secure Upload of data on cloud such that even the administrator is unaware of the contents. 2. Secure Download of data in such a way that the integrity of data is maintained. 3. Proper usage and sharing of the public, private and secret keys involved for encryption and decryption. 
The use of a single key for both encryption and decryption is very prone to malicious attacks. But in hybrid algorithm, this problem is solved by the use of three separate keys each for encryption as well as decryption. Out of the three keys one is the public key, which is made available to all, the second one is the private key which lies only with the user. In this way, both the secure upload as well as secure download of the data is facilitated using the two respective keys. Also, the key generation technique used in this paper is unique in its own way. This has helped in avoiding any chances of repeated or redundant key.", "title": "" }, { "docid": "2a7c77985e3fca58ee8a69dd9b6f36d2", "text": "New types of machine learning hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. We already see the limitations of existing algorithms for models that exploit structured input via complex and instancedependent control flow, which prohibits minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times. 
Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.", "title": "" }, { "docid": "2af670323d2857cd79ac967bd71c61c1", "text": "This paper describes a new architecture for synthetic aperture radar (SAR) automatic target recognition (ATR) based on the premise that the pose of the target is estimated within a high degree of precision. The advantage of our classifier design is that the input space complexity is decreased with the pose information, which enables fewer features to classify targets with a higher degree of accuracy. Moreover, the training of the classifier can be done discriminantely, which also improves performance and decreases the complexity of the classifier. Three strategies of learning and representation to build the pattern space and discriminant functions are compared: Vapnik's support vector machine (SVM), a newly developed quadratic mutual information (QMI) cost function for neural networks, and a principal component analysis extended recently with multi-resolution (PCA-M). Experimental results obtained in the MSTAR database show that the performance of our classifiers is better than that of standard template matching in the same dataset. We also rate the quality of the classifiers for detection using confusers, and show significant improvement in rejection.", "title": "" }, { "docid": "1b7d19d41164bda14c688224cce700d5", "text": "Urethral duplication is a rare congenital malformation affecting mainly boys. The authors report a case in a Cameroonian child who was diagnosed and managed at the Gynaeco-Obstetric and Paediatric Hospital, Yaounde. The malformation was characterized by the presence of an incontinent epispadic urethra and a normal apical urethra. 
We describe the difficulties faced in the management of this disorder in a developing country.", "title": "" }, { "docid": "5636a228fea893cd48cebe15f72c0bb0", "text": "A familicide is a multiple-victim homicide incident in which the killer’s spouse and one or more children are slain. National archives of Canadian and British homicides, containing 109 familicide incidents, permit some elucidation of the characteristic and epidemiology of this crime. Familicides were almost exclusively perpetrated by men, unlike other spouse-killings and other filicides. Half the familicidal men killed themselves as well, a much higher rate of suicide than among other uxoricidal or filicidal men. De facto unions were overrepresented, compared to their prevalence in the populations-atlarge, but to a much lesser extent in familicides than in other uxoricides. Stepchildren were overrepresented as familicide victims, compared to their numbers in the populations-at-large, but to a much lesser extent than in other filicides; unlike killers of their genetic offspring, men who killed their stepchildren were rarely suicidal. An initial binary categorization of familicides as accusatory versus despondent is tentatively proposed. @ 19% wiley-Liss, Inc.", "title": "" }, { "docid": "0368fdfe05918134e62e0f7b106130ee", "text": "Scientific charts are an effective tool to visualize numerical data trends. They appear in a wide range of contexts, from experimental results in scientific papers to statistical analyses in business reports. The abundance of scientific charts in the web has made it inevitable for search engines to include them as indexed content. However, the queries based on only the textual data used to tag the images can limit query results. Many studies exist to address the extraction of data from scientific diagrams in order to improve search results. 
In our approach to achieving this goal, we attempt to enhance the semantic labeling of the charts by using the original data values that these charts were designed to represent. In this paper, we describe a method to extract data values from a specific class of charts, bar charts. The extraction process is fully automated using image processing and text recognition techniques combined with various heuristics derived from the graphical properties of bar charts. The extracted information can be used to enrich the indexing content for bar charts and improve search results. We evaluate the effectiveness of our method on bar charts drawn from the web as well as charts embedded in digital documents.", "title": "" }, { "docid": "e0d553cc4ca27ce67116c62c49c53d23", "text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. 
The approach is illustrated using vehicle speed estimation and classification results obtained with field data.", "title": "" }, { "docid": "262302228a88025660c0add90d500518", "text": "Social network analysis provides meaningful information about behavior of network members that can be used for diverse applications such as classification and link prediction. However, network analysis is computationally expensive because of feature learning for different applications. In recent years, much research has focused on feature learning methods in social networks. Network embedding represents the network in a lower dimensional representation space with the same properties, which presents a compressed representation of the network. In this paper, we introduce a novel algorithm named “CARE” for network embedding that can be used for different types of networks including weighted, directed and complex. Current methods try to preserve local neighborhood information of nodes, whereas the proposed method utilizes local neighborhood and community information of network nodes to cover both local and global structure of social networks. CARE builds customized paths, which consist of local and global structure of network nodes, as a basis for network embedding and uses the Skip-gram model to learn the representation vectors of nodes. Subsequently, stochastic gradient descent is applied to optimize our objective function and learn the final representation of nodes. Our method remains scalable when new nodes are appended to the network, without information loss. Parallelized generation of customized random walks is also used for speeding up CARE. We evaluate the performance of CARE on multi-label classification and link prediction tasks. 
Experimental results on various networks indicate that the proposed method outperforms others in both Micro and Macro-f1 measures for different sizes of training data.", "title": "" }, { "docid": "e86ce9f0a1beb982f8358930e8ef776d", "text": "We study the function g(n, y) := Σ_{i≤n, P(i)≤y} gcd(i, n), where P(n) denotes the largest prime factor of n, and we derive some estimates for its summatory function.", "title": "" }, { "docid": "9953909d2e520abf8227fd9025260d55", "text": "Silicones are used in the plastics industry as additives for improving the processing and surface properties of plastics, as well as the rubber phase in a novel family of thermoplastic vulcanizate (TPV) materials. As additives, silicones, and in particular polydimethylsiloxane (PDMS), are used to improve mold filling, surface appearance, mold release, surface lubricity and wear resistance. As the rubber portion of a TPV, the cross-linked silicone rubber imparts novel properties, such as lower hardness, reduced coefficient of friction and improved low and high temperature properties.", "title": "" }, { "docid": "1839d9e6ef4bad29381105f0a604b731", "text": "Our focus is on the effects that dated ideas about the nature of science (NOS) have on curriculum, instruction and assessments. First we examine historical developments in teaching about NOS, beginning with the seminal ideas of James Conant. Next we provide an overview of recent developments in philosophy and cognitive sciences that have shifted NOS characterizations away from general heuristic principles toward cognitive and social elements. Next, we analyze two alternative views regarding ‘explicitly teaching’ NOS in pre-college programs. Version 1 is grounded in teachers presenting ‘Consensus-based Heuristic Principles’ in science lessons and activities.
Version 2 is grounded in learners' experience of ‘Building and Refining Model-Based Scientific Practices’ in critique and communication enactments that occur in longer immersion units and learning progressions. We argue that Version 2 is to be preferred over Version 1 because it develops the critical epistemic cognitive and social practices that scientists and science learners use when (1) developing and evaluating scientific evidence, explanations and knowledge and (2) critiquing and communicating scientific ideas and information; thereby promoting science literacy. 1 NOS and Science Education When and how did knowledge about science, as opposed to scientific content knowledge, become a targeted outcome of science education? From a US perspective, the decades of interest are the 1940s and 1950s when two major post-war developments in science education policy initiatives occurred. The first, in post-secondary education, was the GI Bill.", "title": "" } ]
scidocsrr
e83088bb506326187a151acf48534dcf
Construal Levels and Psychological Distance: Effects on Representation, Prediction, Evaluation, and Behavior.
[ { "docid": "e992ffd4ebbf9d096de092caf476e37d", "text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.", "title": "" } ]
[ { "docid": "50442aa4ef1d7c89822d77a5b3a0ee85", "text": "The utilization of an AC induction motor (ACIM) ranges from consumer to automotive applications, with a variety of power and sizes. From the multitude of possible applications, some require the achievement of high speed while having a high torque value only at low speeds. Two applications needing this requirement are washing machines in consumer applications and traction in powertrain applications. These requirements impose a certain type of approach for induction motor control, which is known as “field weakening.”", "title": "" }, { "docid": "49a2202592071a07109bd347563e4d6b", "text": "To model deformation of anatomical shapes, non-linear statistics are required to take into account the non-linear structure of the data space. Computer implementations of non-linear statistics and differential geometry algorithms often lead to long and complex code sequences. The aim of the paper is to show how the Theano framework can be used for simple and concise implementation of complex differential geometry algorithms while being able to handle complex and high-dimensional data structures. We show how the Theano framework meets both of these requirements. The framework provides a symbolic language that allows mathematical equations to be directly translated into Theano code, and it is able to perform both fast CPU and GPU computations on highdimensional data. We show how different concepts from non-linear statistics and differential geometry can be implemented in Theano, and give examples of the implemented theory visualized on landmark representations of Corpus Callosum shapes.", "title": "" }, { "docid": "6d2667dd550e14d4d46b24d9c8580106", "text": "Deficits in gratification delay are associated with a broad range of public health problems, such as obesity, risky sexual behavior, and substance abuse. 
However, 6 decades of research on the construct has progressed less quickly than might be hoped, largely because of measurement issues. Although past research has implicated 5 domains of delay behavior, involving food, physical pleasures, social interactions, money, and achievement, no published measure to date has tapped all 5 components of the content domain. Existing measures have been criticized for limitations related to efficiency, reliability, and construct validity. Using an innovative Internet-mediated approach to survey construction, we developed the 35-item 5-factor Delaying Gratification Inventory (DGI). Evidence from 4 studies and a large, diverse sample of respondents (N = 10,741) provided support for the psychometric properties of the measure. Specifically, scores on the DGI demonstrated strong internal consistency and test-retest reliability for the 35-item composite, each of the 5 domains, and a 10-item short form. The 5-factor structure fit the data well and had good measurement invariance across subgroups. Construct validity was supported by correlations with scores on closely related self-control measures, behavioral ratings, Big Five personality trait measures, and measures of adjustment and psychopathology, including those on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. DGI scores also showed incremental validity in accounting for well-being and health-related variables. The present investigation holds implications for improving public health, accelerating future research on gratification delay, and facilitating survey construction research more generally by demonstrating the suitability of an Internet-mediated strategy.", "title": "" }, { "docid": "cf9d3c47ee93299f269484ffdbe44453", "text": "As the complexity and variety of computer system hardware increases, its suitability as a pedagogical tool in computer organization/architecture courses diminishes. 
As a consequence, many instructors are turning to simulators as teaching aids, often using valuable teaching/research time to construct them. Many of these simulators have been made freely available on the Internet, providing a useful and time-saving resource for other instructors. However, finding the right simulator for a particular course or topic can itself be a time-consuming process. The goal of this paper is to provide an easy-to-use survey of free and Internet-accessible computer system simulators as a resource for all instructors of computer organization and computer architecture courses.", "title": "" }, { "docid": "290869845a0ce3d1bf3722bfba7dd1c5", "text": "Supplier selection is an important and widely studied topic since it has a significant impact on purchasing management in the supply chain. Recently, the support vector machine has received much more attention from researchers, while studies on supplier selection based on it are few. In this paper, a new support vector machine technology, the potential support vector machine, is introduced and then combined with a decision tree to address issues in supplier selection, including feature selection and multiclass classification. A hierarchical potential support vector machine and a hierarchical system of features are put forward in the paper, and experiments show the proposed methodology has much better generalization performance and less computational consumption than the standard support vector machine.", "title": "" }, { "docid": "0d5ca0e11363cae0b4d7f335cf832e24", "text": "This paper presents an investigation into two fuzzy association rule mining models for enhancing prediction performance. The first model (the FCM-Apriori model) integrates Fuzzy C-Means (FCM) and the Apriori approach for road traffic performance prediction. FCM is used to define the membership functions of fuzzy sets and the Apriori approach is employed to identify the Fuzzy Association Rules (FARs).
The proposed model extracts knowledge from a database for a Fuzzy Inference System (FIS) that can be used in prediction of a future value. The knowledge extraction process and the performance of the model are demonstrated through two case studies of road traffic data sets with different sizes. The experimental results show the merits and capability of the proposed KD model in FARs based knowledge extraction. The second model (the FCM-MSapriori model) integrates FCM and a Multiple Support Apriori (MSapriori) approach to extract the FARs. These FARs provide the knowledge base to be utilized within the FIS for prediction evaluation. Experimental results have shown that the FCM-MSapriori model predicted the future values effectively and outperformed the FCM-Apriori model and other models reported in the literature.", "title": "" }, { "docid": "7834f32e3d6259f92f5e0beb3a53cc04", "text": "An educational institution needs to have an approximate prior knowledge of enrolled students to predict their performance in future academics. This helps them to identify promising students and also provides them an opportunity to pay attention to and improve those who would probably get lower grades. As a solution, we have developed a system which can predict the performance of students from their previous performances using concepts of data mining techniques under Classification. We have analyzed the data set containing information about students, such as gender, marks scored in the board examinations of classes X and XII, marks and rank in entrance examinations and results in first year of the previous batch of students. By applying the ID3 (Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we have predicted the general and individual performance of freshly admitted students in future examinations.", "title": "" }, { "docid": "e1885f9c373c355a4df9307c6d90bf83", "text": "Ricinulei possess movable, slender pedipalps with small chelae. 
When ricinuleids walk, they occasionally touch the soil surface with the tips of their pedipalps. This behavior is similar to the exploration movements they perform with their elongated second legs. We studied the distal areas of the pedipalps of the cavernicolous Mexican species Pseudocellus pearsei with scanning and transmission electron microscopy. Five different surface structures are characteristic for the pedipalps: (1) slender sigmoidal setae with smooth shafts resembling gustatory terminal pore single-walled (tp-sw) sensilla; (2) conspicuous long, mechanoreceptive slit sensilla; (3) a single, short, clubbed seta inside a deep pit representing a no pore single-walled (np-sw) sensillum; (4) a single pore organ containing one olfactory wall pore single-walled (wp-sw) sensillum; and (5) gustatory terminal pore sensilla in the fingers of the pedipalp chela. Additionally, the pedipalps bear sensilla which also occur on the other appendages. With this sensory equipment, the pedipalps are highly effective multimodal short-range sensory organs which complement the long-range sensory function of the second legs. In order to present the complete sensory equipment of all appendages of the investigated Pseudocellus, a comparative overview is provided.", "title": "" }, { "docid": "995376c324ff12a0be273e34f44056df", "text": "The conventional Gabor representation and its extracted features often yield fairly poor performance in retrieving rotated and scaled versions of the texture image under query. To address this issue, existing methods exploit multiple stages of transformations to make rotation and/or scaling invariant, at the expense of high computational complexity and degraded retrieval performance. The latter is mainly due to the loss of image details after multiple transformations.
In this paper, a rotation-invariant and a scale-invariant Gabor representation are proposed, where each representation only requires a few summations on the conventional Gabor filter impulse responses. The optimum setting of the orientation parameter and scale parameter is experimentally determined over the Brodatz and MPEG-7 texture databases. Features are then extracted from these new representations for conducting rotation-invariant or scale-invariant texture image retrieval. Since the dimension of the new feature space is much reduced, this leads to a much smaller metadata storage space and faster on-line computation of the similarity measurement. Simulation results clearly show that our proposed invariant Gabor representations and their extracted invariant features significantly outperform the conventional Gabor representation approach for rotation-invariant and scale-invariant texture image retrieval.", "title": "" }, { "docid": "a5614379a447180fe0ab5ab83770dafb", "text": "This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is re-formulated from the perspective of a histogram, which gives us the potential to reduce the complexity of the cost aggregation significantly. Different from previous methods, which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all disparity hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The trade-off between accuracy and complexity is extensively investigated with respect to the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity.
This work provides new insights into complexity-constrained stereo matching algorithm design.", "title": "" }, { "docid": "c5a36e3b8196815fea6b5db825c09133", "text": "In this paper, solutions for developing low-cost electronics for antenna transceivers that take advantage of the stable electrical properties of the organic substrate liquid crystal polymer (LCP) have been presented. Three important ingredients in RF wireless transceivers, namely embedded passives, a dual-band filter and an RFID antenna, have been designed and fabricated on LCP. Test results of all three structures show good agreement between the simulated and measured results over their respective bandwidths, demonstrating stable performance of the LCP substrate.", "title": "" }, { "docid": "9faa8b39898eaa4ca0a0c23d29e7a0ff", "text": "Highly emphasized in entrepreneurial practice, business models have received limited attention from researchers. No consensus exists regarding the definition, nature, structure, and evolution of business models. Still, the business model holds promise as a unifying unit of analysis that can facilitate theory development in entrepreneurship. This article synthesizes the literature and draws conclusions regarding a number of these core issues. Theoretical underpinnings of a firm's business model are explored. A six-component framework is proposed for characterizing a business model, regardless of venture type. These components are applied at three different levels. The framework is illustrated using a successful mainstream company. Suggestions are made regarding the manner in which business models might be expected to emerge and evolve over time.", "title": "" }, { "docid": "9ff22294cf279d757a84ae00d4e29473", "text": "We usually endow the investigated objects with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of our interest are more complex than pairwise.
Naively squeezing the complex relationships into pairwise ones, however, will inevitably lead to a loss of information that can be expected to be valuable for our learning tasks. Therefore we consider using hypergraphs instead to completely represent complex relationships among the objects of our interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering, which originally operates on undirected graphs, to hypergraphs, and further develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks showed the advantages of hypergraphs over usual graphs.", "title": "" }, { "docid": "dc33e4c6352c885fb27e08fa1c310fb3", "text": "Association rule mining algorithms are used to extract relevant information from a database and distill it into a simple, easily understood form. Association rule mining operates on large sets of data and is used for mining frequent itemsets in a database or data warehouse; it is one type of data mining procedure. In this paper, association rule mining algorithms such as Apriori, Partition, FP-growth, and genetic algorithms are analyzed with respect to how effectively they generate frequent itemsets. These algorithms differ in their performance and in the effectiveness of the patterns they generate, so this paper concentrates on algorithms that generate frequent itemsets efficiently.", "title": "" }, { "docid": "7d713780dd3f7ad0abc5ec02f2a5d8f2", "text": "Pelvic discontinuity is a challenging complication encountered during revision total hip arthroplasty.
Pelvic discontinuity is defined as a separation of the ilium superiorly from the ischiopubic segment inferiorly and is typically a chronic condition in failed total hip arthroplasties in the setting of bone loss. After a history and a physical examination have been completed and infection has been ruled out, appropriate imaging must be obtained, including plain hip radiographs, oblique Judet radiographs, and often a CT scan. The main management options are a hemispheric acetabular component with posterior column plating, a cup-cage construct, pelvic distraction, and a custom triflange construct. The techniques have unique pros and cons, but the goals are to obtain stable and durable acetabular component fixation and a healed or unitized pelvis while minimizing complications.", "title": "" }, { "docid": "81ec51ca319ab957c0e951c9de31859c", "text": "Photography has been striving to capture an ever increasing amount of visual information in a single image. Digital sensors, however, are limited to recording a small subset of the desired information at each pixel. A common approach to overcoming the limitations of sensing hardware is the optical multiplexing of high-dimensional data into a photograph. While this is a well-studied topic for imaging with color filter arrays, we develop a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function. This framework unifies a wide variety of existing approaches to analyze and reconstruct multiplexed data in either the spatial or the frequency domain. 
We demonstrate many practical applications of our framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.", "title": "" }, { "docid": "f43d024b61620a19cfbc3d76b6253332", "text": "Equipped with sensors that are capable of collecting physiological and environmental data continuously, wearable technologies have the potential to become a valuable component of personalized healthcare and health management. However, in addition to the potential benefits of wearable devices, the widespread and continuous use of wearables also poses many privacy challenges. In some instances, users may not be aware of the risks associated with wearable devices, while in other cases, users may be aware of the privacy-related risks, but may be unable to negotiate complicated privacy settings to meet their needs and preferences. This lack of awareness could have an adverse impact on users in the future, even becoming a \"skeleton in the closet.\" In this work, we conducted 32 semi-structured interviews to understand how users perceive privacy in wearable computing. Results suggest that user concerns toward wearable privacy vary widely, ranging from no concern to high concern. In addition, while user concerns and benefits are similar among participants in our study, these variables should be investigated more extensively for the development of privacy-enhanced wearable technologies.", "title": "" }, { "docid": "bc90b1e4d456ca75b38105cc90d7d51d", "text": "Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions.
Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.", "title": "" }, { "docid": "e7bfafee5cfaaa1a6a41ae61bdee753d", "text": "Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. 
", "title": "" }, { "docid": "8f7368daec71ccb4b5c5a2daebda07be", "text": "This paper presents a novel inkjet-printed humidity sensor tag for passive radio-frequency identification (RFID) systems operating at ultrahigh frequencies (UHFs). During recent years, various humidity sensors have been developed by researchers around the world for HF and UHF RFID systems. However, to the best of our knowledge, the humidity sensor presented in this paper is one of the first passive UHF RFID humidity sensor tags fabricated using inkjet technology. This paper describes the structure and operation principle of the sensor tag and discusses the method of performing humidity measurements in practice. Furthermore, measurement results are presented, which include air humidity-sensitivity characterization and tag identification performance measurements.", "title": "" } ]
scidocsrr
6970436fc7413a5cf5b1ee436a820561
BabelRelate! A Joint Multilingual Approach to Computing Semantic Relatedness
[ { "docid": "86820c43e63066930120fa5725b5b56d", "text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.", "title": "" } ]
[ { "docid": "a2622b1e0c1c58a535ec11a5075d1222", "text": "The condition of a machine can automatically be identified by creating and classifying features that summarize characteristics of measured signals. Currently, experts, in their respective fields, devise these features based on their knowledge. Hence, the performance and usefulness depends on the expert's knowledge of the underlying physics or statistics. Furthermore, if new and additional conditions should be detectable, experts have to implement new feature extraction methods. To mitigate the drawbacks of feature engineering, a method from the subfield of feature learning, i.e., deep learning (DL), more specifically convolutional neural networks (NNs), is researched in this paper. The objective of this paper is to investigate if and how DL can be applied to infrared thermal (IRT) video to automatically determine the condition of the machine. By applying this method on IRT data in two use cases, i.e., machine-fault detection and oil-level prediction, we show that the proposed system is able to detect many conditions in rotating machinery very accurately (i.e., 95 and 91.67% accuracy for the respective use cases), without requiring any detailed knowledge about the underlying physics, and thus having the potential to significantly simplify condition monitoring using complex sensor data. Furthermore, we show that by using the trained NNs, important regions in the IRT images can be identified related to specific conditions, which can potentially lead to new physical insights.", "title": "" }, { "docid": "2fd06457db3dfb09af108d22607a923d", "text": "An analysis of an on-chip buck converter is presented in this paper. A high switching frequency is the key design parameter that simultaneously permits monolithic integration and high efficiency. A model of the parasitic impedances of a buck converter is developed. 
With this model, a design space is determined that allows integration of active and passive devices on the same die for a target technology. An efficiency of 88.4% at a switching frequency of 477 MHz is demonstrated for a voltage conversion from 1.2–0.9 volts while supplying 9.5 A average current. The area occupied by the buck converter is 12.6 mm² assuming an 80-nm CMOS technology. An estimate of the efficiency is shown to be within 2.4% of simulation at the target design point. Full integration of a high-efficiency buck converter on the same die with a dual microprocessor is demonstrated to be feasible.", "title": "" }, { "docid": "6ee0c9832d82d6ada59025d1c7bb540e", "text": "Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, Coh-Metrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.", "title": "" }, { "docid": "41c35407c55878910f5dfc2dfe083955", "text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented.
The analysis of the diagram cycles codifies invariant formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.", "title": "" }, { "docid": "0b18f7966a57e266487023d3a2f3549d", "text": "A clear and powerful formalism for describing languages, both natural and artificial, follows from a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension of context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings of that language, since the DCG, as it stands, is an executable program of the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs.
It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful", "title": "" }, { "docid": "08255cbafcf9a3dd9dd9d084c1de543e", "text": "The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum- efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.", "title": "" }, { "docid": "7e2c5184ca6c738f3db3c0ada7cdf37a", "text": "DNA microarray technology has led to an explosion of oncogenomic analyses, generating a wealth of data and uncovering the complex gene expression patterns of cancer. Unfortunately, due to the lack of a unifying bioinformatic resource, the majority of these data sit stagnant and disjointed following publication, massively underutilized by the cancer research community. 
Here, we present ONCOMINE, a cancer microarray database and web-based data-mining platform aimed at facilitating discovery from genome-wide expression analyses. To date, ONCOMINE contains 65 gene expression datasets comprising nearly 48 million gene expression measurements from over 4700 microarray experiments. Differential expression analyses comparing most major types of cancer with respective normal tissues as well as a variety of cancer subtypes and clinical-based and pathology-based analyses are available for exploration. Data can be queried and visualized for a selected gene across all analyses or for multiple genes in a selected analysis. Furthermore, gene sets can be limited to clinically important annotations including secreted, kinase, membrane, and known gene-drug target pairs to facilitate the discovery of novel biomarkers and therapeutic targets.", "title": "" }, { "docid": "66f3db25d6cb91556b6dbfd5c0d2bf41", "text": "Many real-world applications wish to collect tamper-evident logs for forensic purposes. This paper considers the case of an untrusted logger, serving a number of clients who wish to store their events in the log, and kept honest by a number of auditors who will challenge the logger to prove its correct behavior. We propose semantics of tamper-evident logs in terms of this auditing process. The logger must be able to prove that individual logged events are still present, and that the log, as seen now, is consistent with how it was seen in the past. To accomplish this efficiently, we describe a tree-based data structure that can generate such proofs with logarithmic size and space, improving over previous linear constructions. Where a classic hash chain might require an 800 MB trace to prove that a randomly chosen event is in a log with 80 million events, our prototype returns a 3 KB proof with the same semantics.
We also present a flexible mechanism for the log server to present authenticated and tamper-evident search results for all events matching a predicate. This can allow large-scale log servers to selectively delete old events, in an agreed-upon fashion, while generating efficient proofs that no inappropriate events were deleted. We describe a prototype implementation and measure its performance on an 80 million event syslog trace at 1,750 events per second using a single CPU core. Performance improves to 10,500 events per second if cryptographic signatures are offloaded, corresponding to 1.1 TB of logging throughput per week.", "title": "" }, { "docid": "699f4b29e480d89b158326ec4c778f7b", "text": "Much attention is currently being paid in both the academic and practitioner literatures to the value that organisations could create through the use of big data and business analytics (Gillon et al, 2012; Mithas et al, 2013). For instance, Chen et al (2012, p. 1166–1168) suggest that business analytics and related technologies can help organisations to ‘better understand its business and markets’ and ‘leverage opportunities presented by abundant data and domain-specific analytics’. Similarly, LaValle et al (2011, p. 22) report that top-performing organisations ‘make decisions based on rigorous analysis at more than double the rate of lower performing organisations’ and that in such organisations analytic insight is being used to ‘guide both future strategies and day-to-day operations’. We argue here that while there is some evidence that investments in business analytics can create value, the thesis that ‘business analytics leads to value’ needs deeper analysis.
In particular, we argue here that the roles of organisational decision-making processes, including resource allocation processes and resource orchestration processes (Helfat et al, 2007; Teece, 2009), need to be better understood in order to understand how organisations can create value from the use of business analytics. Specifically, we propose that the firstorder effects of business analytics are likely to be on decision-making processes and that improvements in organisational performance are likely to be an outcome of superior decision-making processes enabled by business analytics. This paper is set out as follows. Below, we identify prior research traditions in the Information Systems (IS) literature that discuss the potential of data and analytics to create value. This is to put into perspective the current excitement around ‘analytics’ and ‘big data’, and to position those topics within prior research traditions. We then draw on a number of existing literatures to develop a research agenda to understand the relationship between business analytics, decision-making processes and organisational performance. Finally, we discuss how the three papers in this Special Issue advance the research agenda. Disciplines Engineering | Science and Technology Studies Publication Details Sharma, R., Mithas, S. and Kankanhalli, A. (2014). Transforming decision-making processes: a research agenda for understanding the impact of business analytics on organisations. European Journal of Information Systems, 23 (4), 433-441. 
This journal article is available at Research Online: http://ro.uow.edu.au/eispapers/3231", "title": "" }, { "docid": "7babd48cd74c959c6630a7bc8d1150d7", "text": "This paper discusses a novel hybrid approach for text categorization that combines a machine learning algorithm, which provides a base model trained with a labeled corpus, with a rule-based expert system, which is used to improve the results provided by the previous classifier, by filtering false positives and dealing with false negatives. The main advantage is that the system can be easily fine-tuned by adding specific rules for those noisy or conflicting categories that have not been successfully trained. We also describe an implementation based on k-Nearest Neighbor and a simple rule language to express lists of positive, negative and relevant (multiword) terms appearing in the input text. The system is evaluated in several scenarios, including the popular Reuters-21578 news corpus for comparison to other approaches, and categorization using IPTC metadata, EUROVOC thesaurus and others. Results show that this approach achieves a precision that is comparable to top ranked methods, with the added value that it does not require a demanding human expert workload to train.", "title": "" }, { "docid": "1979fa5a3384477602c0e81ba62199da", "text": "Language style transfer is the problem of migrating the content of a source sentence to a target style. In many of its applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. Under this problem setting, we propose an encoder-decoder framework. First, each sentence is encoded into its content and style latent representations. Then, by recombining the content with the target style, we decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions.
The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. We validate the effectiveness of our model in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.", "title": "" }, { "docid": "627b14801c8728adf02b75e8eb62896f", "text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.", "title": "" }, { "docid": "a79d4b0a803564f417236f2450658fe0", "text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. 
In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the $\ell_{2,p}$-norm (especially when $0 < p \leq 1$) loss function. Specifically, the problems of noise and outliers are well addressed by the $\ell_{2,p}$-norm ($0 < p \leq 1$) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.", "title": "" }, { "docid": "ef065f2471d9b940e9167ff8daf1c735", "text": "Fano’s inequality lower bounds the probability of transmission error through a communication channel. Applied to classification problems, it provides a lower bound on the Bayes error rate and motivates the widely used Infomax principle. In modern machine learning, we are often interested in more than just the error rate. In medical diagnosis, different errors incur different costs; hence, the overall risk is cost-sensitive.
Two other popular criteria are balanced error rate (BER) and F-score. In this work, we focus on the two-class problem and use a general definition of conditional entropy (including Shannon’s as a special case) to derive upper/lower bounds on the optimal F-score, BER and cost-sensitive risk, extending Fano’s result. As a consequence, we show that Infomax is not suitable for optimizing F-score or cost-sensitive risk, in that it can potentially lead to low F-score and high risk. For cost-sensitive risk, we propose a new conditional entropy formulation which avoids this inconsistency. In addition, we consider the common practice of using a threshold on the posterior probability to tune performance of a classifier. As is widely known, a threshold of 0.5, where the posteriors cross, minimizes error rate; we derive similar optimal thresholds for F-score and BER.", "title": "" }, { "docid": "a7123f38dc30813bf82262ae711897a6", "text": "Crime is a behavior disorder that is an integrated result of social, economic and environmental factors. Crimes are a social nuisance and cost our society dearly in several ways. Any research that can help in solving crimes faster will pay for itself. In this paper we look at the use of missing-value and clustering algorithms for crime data using data mining. We will look at the MV algorithm and the Apriori algorithm with some enhancements to aid in the process of filling in missing values and identifying crime patterns. We applied these techniques to real crime data from a city police department. We also use a semi-supervised learning technique here for knowledge discovery from the crime records and to help increase the predictive accuracy.", "title": "" }, { "docid": "1d7b7ea9f0cc284f447c11902bad6685", "text": "In the last few years the efficiency of secure multi-party computation (MPC) has increased by several orders of magnitude. However, this alone might not be enough if we want MPC protocols to be used in practice.
A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damgård et al. Applications such as voting and some auctions are perfect use-cases for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately, no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ, i.e., our online phase has complexity approximately twice that of SPDZ.", "title": "" }, { "docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb", "text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns.
We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. 
If public health advice is to be evidence-based, good quality research is needed.", "title": "" }, { "docid": "74da0fe221dd6a578544e6b4896ef60e", "text": "This paper outlines a new approach to the study of power, that of the sociology of translation. Starting from three principles, those of agnosticism (impartiality between actors engaged in controversy), generalised symmetry (the commitment to explain conflicting viewpoints in the same terms) and free association (the abandonment of all a priori distinctions between the natural and the social), the paper describes a scientific and economic controversy about the causes for the decline in the population of scallops in St. Brieuc Bay and the attempts by three marine biologists to develop a conservation strategy for that population. Four ‘moments’ of translation are discerned in the attempts by these researchers to impose themselves and their definition of the situation on others: (a) problematisation: the researchers sought to become indispensable to other actors in the drama by defining the nature and the problems of the latter and then suggesting that these would be resolved if the actors negotiated the ‘obligatory passage point’ of the researchers’ programme of investigation; (b) interessement: a series of processes by which the researchers sought to lock the other actors into the roles that had been proposed for them in that programme; (c) enrolment: a set of strategies in which the researchers sought to define and interrelate the various roles they had allocated to others; (d) mobilisation: a set of methods used by the researchers to ensure that supposed spokesmen for various relevant collectivities were properly able to represent those collectivities and not betrayed by the latter. 
In conclusion it is noted that translation is a process, never a completed accomplishment, and it may (as in the empirical case considered) fail.", "title": "" }, { "docid": "e6e91ce66120af510e24a10dee6d64b7", "text": "AI plays an increasingly prominent role in society since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding bank loans, criminals’ incarceration, and the hiring of new employees, and it’s not difficult to envision that they will in the future underpin most of the decisions in society. Despite the high complexity entailed by this task, there is still not much understanding of basic properties of such systems. For instance, we currently cannot detect (neither explain nor correct) whether an AI system is operating fairly (i.e., is abiding by the decision-constraints agreed by society) or it is reinforcing biases and perpetuating a preceding prejudicial practice. Issues of discrimination have been discussed extensively in legal circles, but there exists still not much understanding of the formal conditions that a system must adhere to be deemed fair. In this paper, we use the language of structural causality (Pearl, 2000) to fill in this gap. We start by introducing three new fine-grained measures of transmission of change from stimulus to effect, which we called counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. We then derive the causal explanation formula, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms. We apply these results to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. 
We conclude by studying the trade-off between different types of fairness criteria (outcome and procedural), and provide a quantitative approach to policy implementation and the design of fair decision-making systems.", "title": "" } ]
scidocsrr
cc47ef4cd325b8aed4f114ed2257586f
Integrating Programming by Example and Natural Language Programming
[ { "docid": "7d8dcb65acd5e0dc70937097ded83013", "text": "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.", "title": "" }, { "docid": "eb79d012c63ac7904c30a89f62349393", "text": "Learning programs is a timely and interesting challenge. In Programming by Example (PBE), a system attempts to infer a program from input and output examples alone, by searching for a composition of some set of base functions. We show how machine learning can be used to speed up this seemingly hopeless search problem, by learning weights that relate textual features describing the provided input-output examples to plausible sub-components of a program. This generic learning framework lets us address problems beyond the scope of earlier PBE systems. Experiments on a prototype implementation show that learning improves search and ranking on a variety of text processing tasks found on help forums.", "title": "" } ]
[ { "docid": "119ba393df80bc197fda2bd893db1bc7", "text": "Traditional electricity meters are replaced by Smart Meters in customers’ households. Smart Meters collect fine-grained utility consumption profiles from customers, which in turn enables the introduction of dynamic, time-of-use tariffs. However, the fine-grained usage data that is compiled in this process also allows to infer the inhabitant’s personal schedules and habits. We propose a privacy-preserving protocol that enables billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier. Our approach relies on a zero-knowledge proof based on Pedersen Commitments performed by a plug-in privacy component that is put into the communication link between Smart Meter and supplier’s back-end system. We require no changes to the Smart Meter hardware and only small changes to the software of Smart Meter and back-end system. In this paper we describe the functional and privacy requirements, the specification and security proof of our solution and give a performance evaluation of a prototypical implementation.", "title": "" }, { "docid": "8621fff78e92e1e0e9ba898d5e2433ca", "text": "This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. 
We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.", "title": "" }, { "docid": "3ebe9aecd4c84e9b9ed0837bd294b4ed", "text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.", "title": "" }, { "docid": "873aa095401a4f57359f27fcbac88fdd", "text": "We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation is three-fold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only few parts are visible. We show how to use this representation in a robust 3D tracking framework. 
In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.", "title": "" }, { "docid": "c25a59a97870c9296ebf2196d1d10cc7", "text": "(Background) We proposed a novel computer-aided diagnosis (CAD) system based on the hybridization of biogeography-based optimization (BBO) and particle swarm optimization (PSO), with the goal of detecting pathological brains in MRI scanning. (Method) The proposed method used wavelet entropy (WE) to extract features from MR brain images, followed by a feed-forward neural network (FNN) trained with a hybridization of BBO and PSO (HBP), which combined the exploration ability of BBO and the exploitation ability of PSO. (Results) The 10 repetitions of k-fold cross-validation showed that the proposed HBP outperformed existing FNN training methods and that the proposed WE + HBP-FNN outperformed fourteen state-of-the-art CAD systems for MR brain classification in terms of classification accuracy. The proposed method achieved accuracy of 100%, 100%, and 99.49% over Dataset-66, Dataset-160, and Dataset-255, respectively. The offline learning cost 208.2510 s for Dataset-255, and online prediction merely 0.053 s. (Conclusion) The proposed WE + HBP-FNN method achieves nearly perfect detection of pathological brains in MRI scanning.", "title": "" }, { "docid": "60f9a34771b844228e1d8da363e89359", "text": "3-mercaptopyruvate sulfurtransferase (3-MST) is a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzymes. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states.
To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, a mouse TBI model was established by a controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI were investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mouse brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neurons. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3-positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neurons and involved in the pathophysiology of the brain after TBI.", "title": "" }, { "docid": "dfe502f728d76f9b4294f725eca78413", "text": "SUMMARY This paper reports work being carried out under the AMODEUS project (BRA 3066). The goal of the project is to develop interdisciplinary approaches to studying human-computer interaction and to move towards applying the results to the practicalities of design. This paper describes one of the approaches the project is taking to represent design: Design Space Analysis. One of its goals is to help us bridge from relatively theoretical concerns to the practicalities of design. Design Space Analysis is a central component of a framework for representing the design rationale for designed artifacts. Our current work focusses more specifically on the design of user interfaces.
A Design Space Analysis is represented using the QOC notation, which consists of Questions identifying key design issues, Options providing possible answers to the Questions, and Criteria for assessing and comparing the Options. In this paper we give an overview of our approach, some examples of the research issues we are currently tackling and an illustration of its role in helping to integrate the work of some of our project partners with design considerations.", "title": "" }, { "docid": "ff7db3cca724a06c594a525b1f229024", "text": "At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states--called core affect--influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes.", "title": "" }, { "docid": "c3ba6fea620b410d5b6d9b07277d431e", "text": "Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. 
In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long, ten-nanometer-wide antenna. This result has the potential to enable EM communication in nanonetworks.", "title": "" }, { "docid": "a6fbd3f79105fd5c9edfc4a0292a3729", "text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal.
We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.", "title": "" }, { "docid": "bbd378407abb1c2a9a5016afee40c385", "text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.", "title": "" }, { "docid": "7392769dae1e2859bb264774778860a0", "text": "Abstract form only given. Communications is becoming increasingly important to the operation of protection and control schemes. Although offering many benefits, using standards-based communications, particularly IEC 61850, in the course of the research and development of novel schemes can be complex. This paper describes an open source platform which enables the rapid-prototyping of communications-enhanced schemes. 
The platform automatically generates the data model and communications code required for an Intelligent Electronic Device (IED) to implement publisher-subscriber Generic Object-Oriented Substation Event (GOOSE) and Sampled Value (SV) messaging. The generated code is tailored to a particular System Configuration Description (SCD) file, and is therefore extremely efficient at run-time. It is shown how a model-centric tool, such as the open source Eclipse Modeling Framework, can be used to manage the complexity of the IEC 61850 standard, by providing a framework for validating SCD files and by automating parts of the code generation process.", "title": "" }, { "docid": "80bf80719a1751b16be2420635d34455", "text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. 
This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.", "title": "" }, { "docid": "806eb562d4e2f1c8c45a08d7a8e7ce31", "text": "We study admissibility of inference rules and unification with parameters in transitive modal logics (extensions of K4), in particular we generalize various results on parameterfree admissibility and unification to the setting with parameters. Specifically, we give a characterization of projective formulas generalizing Ghilardi’s characterization in the parameter-free case, leading to new proofs of Rybakov’s results that admissibility with parameters is decidable and unification is finitary for logics satisfying suitable frame extension properties (called cluster-extensible logics in this paper). We construct explicit bases of admissible rules with parameters for cluster-extensible logics, and give their semantic description. We show that in the case of finitely many parameters, these logics have independent bases of admissible rules, and determine which logics have finite bases. As a sideline, we show that cluster-extensible logics have various nice properties: in particular, they are finitely axiomatizable, and have an exponential-size model property. We also give a rather general characterization of logics with directed (filtering) unification. In the sequel, we will use the same machinery to investigate the computational complexity of admissibility and unification with parameters in cluster-extensible logics, and we will adapt the results to logics with unique top cluster (e.g., S4.2) and superintuitionistic logics.", "title": "" }, { "docid": "d96373920011674bbb6b2008e9d4eec2", "text": "Social networking site users must decide what content to share and with whom. Many social networks, including Facebook, provide tools that allow users to selectively share content or block people from viewing content. 
However, sometimes instead of targeting a particular audience, users will self-censor, or choose not to share. We report the results from an 18-participant user study designed to explore self-censorship behavior as well as the subset of unshared content participants would have potentially shared if they could have specifically targeted desired audiences. We asked participants to report all content they thought about sharing but decided not to share on Facebook and interviewed participants about why they made sharing decisions and with whom they would have liked to have shared or not shared. Participants reported that they would have shared approximately half the unshared content if they had been able to exactly target their desired audiences.", "title": "" }, { "docid": "019f4534383668216108a456ac086610", "text": "Cloud computing is an emerging paradigm for large scale infrastructures. It has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. These new features have a direct impact on the budgeting of IT budgeting but also affect traditional security, trust and privacy mechanisms. Many of these mechanisms are no longer adequate, but need to be rethought to fit this new paradigm. In this paper we assess how security, trust and privacy issues occur in the context of cloud computing and discuss ways in which they may be addressed.", "title": "" }, { "docid": "745562de56499ff0030f35afa8d84b7f", "text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. 
Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.", "title": "" }, { "docid": "440b6eb0db7d28e85b74fd92c17dd818", "text": "Recent advances in health and life sciences have led to generation of a large amount of data. To facilitate access to its desired parts, such a big mass of data has been represented in structured forms, like biomedical ontologies. On the other hand, representing ontologies in a formal language, constructing them independently from each other and storing them at different locations have brought about many challenges for answering queries about the knowledge represented in these ontologies. One of the challenges for the users is to be able represent a complex query in a natural language, and get its answers in an understandable form: Currently, such queries are answered by software systems in a formal language, however, the majority of the users lack the necessary knowledge of a formal query language to represent a query; moreover, none of these systems can provide informative explanations about the answers. Another challenge is to be able to answer complex queries that require appropriate integration of relevant knowledge stored in different places and in various forms. In this work, we address the first challenge by developing an intelligent user interface that allows users to enter biomedical queries in a natural language, and that presents the answers (possibly with explanations if requested) in a natural language. We address the second challenge by developing a rule layer over biomedical ontologies and databases, and use automated reasoners to answer queries considering relevant parts of the rule layer. The main contributions of our work can be summarized as follows:", "title": "" } ]
scidocsrr
15f23f09085e0dae423253cfe45ca814
A fuzzy model for wind speed prediction and power generation in wind parks using spatial correlation
[ { "docid": "00b8207e783aed442fc56f7b350307f6", "text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of a fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.", "title": "" }, { "docid": "338a8efaaf4a790b508705f1f88872b2", "text": "During the past several years, fuzzy control has emerged as one of the most active and fruitful areas for research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations. Fuzzy control is based on fuzzy logic, a logical system that is much closer in spirit to human thinking and natural language than traditional logical systems. The fuzzy logic controller (FLC) based on fuzzy logic provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. A survey of the FLC is presented; a general methodology for constructing an FLC and assessing its performance is described; and problems that need further research are pointed out. In particular, the exposition includes a discussion of fuzzification and defuzzification strategies, the derivation of the database and fuzzy control rules, the definition of fuzzy implication, and an analysis of fuzzy reasoning mechanisms. An FLC may be regarded as a means of emulating a skilled human operator. More generally, the use of an FLC may be viewed as still another step in the direction of modeling human decisionmaking within the conceptual framework of fuzzy logic and approximate reasoning.
In this context, the forward data-driven inference (generalized modus ponens) plays an especially important role. In what follows, we shall investigate fuzzy implication functions, the sentence connectives 'and' and 'also', compositional operators, inference mechanisms, and other concepts that are closely related to the decisionmaking logic of an FLC. In general, a fuzzy control rule is a fuzzy relation which is expressed as a fuzzy implication. In fuzzy logic, there are many ways in which a fuzzy implication may be defined. The definition of a fuzzy implication may be expressed as a fuzzy implication function. The choice of a fuzzy implication function reflects not only the intuitive criteria for implication but also the effect of the connective 'also'. I) Basic Properties of a Fuzzy Implication Function: The choice of a fuzzy implication function involves a number of criteria, which are discussed in the literature; the following basic characteristics of a fuzzy implication function are considered: fundamental property, smoothness property, unrestricted inference, symmetry of generalized modus ponens and generalized modus tollens, and a measure of propagation of fuzziness. All of these properties are justified on purely intuitive grounds. We prefer to say …", "title": "" } ]
[ { "docid": "f370a8ff8722d341d6e839ec2c7217c1", "text": "We give the first O(m polylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.", "title": "" }, { "docid": "5d3738f554cbcba51d59ac18087795e0", "text": "This study examined the role of mono- and biarticular muscles in control of countermovement jumps (CMJ) in different directions. It was hypothesized that monoarticular muscles would demonstrate the same activity regardless of jump direction, based on previous studies which suggest their role is to generate energy to maximize center-of-mass (CM) velocity. In contrast, biarticular activity patterns were expected to change to control the direction of the ground reaction force (GRF) and CM velocity vectors. Twelve participants performed maximal CMJs in four directions: vertical, forward, intermediate forward, and backward. Electromyographical data from 4 monoarticular and 3 biarticular lower extremity muscles were analyzed with respect to segmental kinematics and kinetics during the jumps. The biarticular rectus femoris (RF), hamstrings (HA), and gastrocnemius all exhibited changes in activity magnitude and pattern as a function of jump angle. In particular, HA and RF demonstrated reciprocal trends, with HA activity increasing as jump angle changed from backward to forward, while RF activity was reduced in the forward jump condition. The vastus lateralis and gluteus maximus both demonstrated changes in activity patterns, although the former was the only monoarticular muscle to change activity level with jump direction.
Mono- and biarticular muscle activities therefore did not fit with their hypothesized roles. CM and segmental kinematics suggest that jump direction was initiated early in the countermovement, and that in each jump direction the propulsion phase began from a different position with unique angular and linear momentum. Issues that dictated the muscle activity patterns in each jump direction were the early initiation of appropriate forward momentum, the transition from countermovement to propulsion, the control of individual segment rotations, the control of GRF location and direction, and the influence of the subsequent landing.", "title": "" }, { "docid": "c5b7fc20ec1f53390fbee7815e334c63", "text": "In this paper, we propose a novel optimization framework for Roadside Unit (RSU) deployment and configuration in a vehicular network. We formulate the problem of placement of RSUs and selecting their configurations (e.g. power level, types of antenna and wired/wireless back haul network connectivity) as a linear program. The objective function is to minimize the total cost to deploy and maintain the network of RSUs. A user-specified constraint on the minimum coverage provided by the RSUs is also incorporated into the optimization framework. Further, the framework also supports the option of specifying selected regions of higher importance, such as locations of frequently occurring accidents, and incorporating constraints requiring stricter coverage in those areas. Simulation results are presented to demonstrate the feasibility of deployment on the campus map of Southern Methodist University (SMU). The efficiency and scalability of the optimization procedure for large scale problems are also studied, and results show that optimization over an area the size of Cambridge, Massachusetts is completed in under 2 minutes.
Finally, the effects of variation in several key parameters on the resulting design are studied.", "title": "" }, { "docid": "f0f432edbfd66ae86621c9888d04249d", "text": "Facial retouching is widely used in media and entertainment industry. Professional software usually require a minimum level of user expertise to achieve the desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfection. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results conducted on images downloaded from the Internet to show the efficacy of our algorithms.", "title": "" }, { "docid": "0c6ad036e4136034d515c8eab4d414e2", "text": "This paper presents Social MatchUP, a multiplayer Virtual Reality game for children with Neurodevelopmental Disorders (NDD). Shared virtual reality environments (SVREs) allow NDD children to interact in the same virtual space, but without the possible discomfort or fear caused by having a real person in front of them. 
Social MatchUP is a simple Concentration-like game, run on smartphones, where players must communicate to match up all the pairs of images they are given. Because every player can only interact with half of the pictures, but can see what their companion is doing, the game improves social and communication skills, and can also be used as a learning tool. A simple and easy-to-use customization tool was also developed to let therapists and teachers adapt the game context to the needs of the children they take care of.
We demonstrate the utility of ACT using a practical case study (BGP attacks).", "title": "" }, { "docid": "a9314b036f107c99545349ccdeb30781", "text": "The development and implementation of language teaching programs can be approached in several different ways, each of which has different implications for curriculum design. Three curriculum approaches are described and compared. Each differs with respect to when issues related to input, process, and outcomes, are addressed. Forward design starts with syllabus planning, moves to methodology, and is followed by assessment of learning outcomes. Resolving issues of syllabus content and sequencing are essential starting points with forward design, which has been the major tradition in language curriculum development. Central design begins with classroom processes and methodology. Issues of syllabus and learning outcomes are not specified in detail in advance and are addressed as the curriculum is implemented. Many of the ‘innovative methods’ of the 1980s and 90s reflect central design. Backward design starts from a specification of learning outcomes and decisions on methodology and syllabus are developed from the learning outcomes. The Common European Framework of Reference is a recent example of backward design. Examples will be given to suggest how the distinction between forward, central and backward design can clarify the nature of issues and trends that have emerged in language teaching in recent years.", "title": "" }, { "docid": "2b491f3c06f91e62e07b43c68bec0801", "text": "Sissay M.M., 2007. Helminth parasites of sheep and goats in eastern Ethiopia: Epidemiology, and anthelmintic resistance and its management. Doctoral thesis, Swedish University of Agricultural Sciences, Uppsala, Sweden. ISSN 1652-6880, ISBN 978-91-576-7351-0 A two-year epidemiology study of helminths of small ruminants involved the collection of viscera from 655 sheep and 632 goats from 4 abattoirs in eastern Ethiopia. 
A further more detailed epidemiology study of gastro-intestinal nematode infections used the Haramaya University (HU) flock of 60 Black Head Ogaden sheep. The parasitological data included numbers of nematode eggs per gram of faeces (EPG), faecal culture L3 larvae, packed red cell volume (PCV), adult worm and early L4 counts, and FAMACHA eye-colour score estimates, along with animal performance (body weight change). There were 13 species of nematodes and 4 species of flukes present in the sheep and goats, with Haemonchus contortus being the most prevalent (65–80%), followed by Trichostrongylus spp. The nematode infection levels of both sheep and goats followed the bi-modal annual rainfall pattern, with the highest worm burdens occurring during the two rain seasons (peaks in May and September). There were significant differences in worm burdens between the 4 geographic locations for both sheep and goats. Similar seasonal but not geographical variations occurred in the prevalence of flukes. There were significant correlations between EPG and PCV, EPG and FAMACHA scores, and PCV and FAMACHA scores. Moreover, H. contortus showed an increased propensity for arrested development during the dry seasons. Faecal egg count reduction tests (FECRT) conducted on the HU flocks, and flocks in surrounding small-holder communities, evaluated the efficacy of commonly used anthelmintics, including albendazole (ABZ), tetramisole (TET), a combination (ABZ + TET) and ivermectin (IVM). Initially, high levels of resistance to all of the anthelmintics were found in the HU goat flock but not in the sheep. In an attempt to restore the anthelmintic efficacy a new management system was applied to the HU goat flock, including: eliminating the existing parasite infections in the goats, exclusion from the traditional goat pastures, and initiation of communal grazing of the goats with the HU sheep and animals of the local small-holder farmers. 
Subsequent FECRTs revealed high levels of efficacy of all three drugs in the goat and sheep flocks, demonstrating that anthelmintic efficacy can be restored by exploiting refugia. Individual FECRTs were also conducted on 8 sheep and goat flocks owned by neighbouring small-holder farmers, who received breeding stock from the HU. In each FECRT, 50 local breed sheep and goats, 6–9 months old, were divided into 5 treatment groups: ABZ, TET, ABZ + TET, IVM and untreated control. There was no evidence of anthelmintic resistance in the nematodes, indicating that dilution of resistant parasites, which are likely to be imported with introduced breeding goats, and the low selection pressure imposed by the small-holder farmers, had prevented anthelmintic resistance from emerging.", "title": "" }, { "docid": "338b6f6cd30f16ebfc991215e7ea5931", "text": "Distance learning, electronic learning, and mobile learning offer content, methods, and technologies that decrease the limitations of traditional education. Mobile learning (m-learning) is an extension of distance education, supported by mobile devices equipped with wireless technologies. It is an emerging learning model and process that requires new forms of teaching, learning, contents, and dynamics between actors. In order to ascertain the current state of knowledge and research, an extensive review of the literature in m-learning has been undertaken to identify and harness potential factors and gaps in implementation. This article provides a critical analysis of m-learning projects and related literature, presenting the findings of this aforementioned analysis. 
It seeks to facilitate the inquiry into the following question: “What is possible in m-learning using recent technologies?” The analysis will be divided into two main parts: applications from the recent online mobile stores and operating system standalone applications.", "title": "" }, { "docid": "515519cc7308477e1c38a74c4dd720f0", "text": "The objective of cosmetic surgery is increased patient self-esteem and confidence. Most patients undergoing a procedure report these results post-operatively. The success of any procedure is measured in patient satisfaction. In order to optimize patient satisfaction, literature suggests careful pre-operative patient preparation including a discussion of the risks, benefits, limitations and expected results for each procedure undertaken. As a general rule, the patients that are motivated to surgery by a desire to align their outward appearance to their body-image tend to be the most satisfied. There are some psychiatric conditions that can prevent a patient from being satisfied without regard aesthetic success. The most common examples are minimal defect/Body Dysmorphic Disorder, the patient in crisis, the multiple revision patient, and loss of identity. This paper will familiarize the audience with these conditions, symptoms and related illnesses. Case examples are described and then explored in terms of the conditions presented. A discussion of the patient’s motivation for surgery, goals pertaining to specific attributes, as well as an evaluation of the patient’s understanding of the risks, benefits, and limitations of the procedure can help the physician determine if a patient is capable of being satisfied with a cosmetic plastic surgery procedure. Plastic surgeons can screen patients suffering from these conditions relatively easily, as psychiatry is an integral part of medical school education. 
If a psychiatric referral is required, then the psychiatrist needs to be aware of the nuances of each of these conditions.", "title": "" }, { "docid": "ff345d732a273577ca0f965b92e1bbbd", "text": "Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. In this paper we examine the developments in IC testing form the historic, current status and future view points. In separate sections we address the testing developments for digital, mixed signal and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.", "title": "" }, { "docid": "822fdafcb1cec1c0f54e82fb79900ff3", "text": "Chlorophyll fluorescence imaging was used to follow infections of Nicotiana benthamiana with the hemibiotrophic fungus, Colletotrichum orbiculare. Based on Fv/Fm images, infected leaves were divided into: healthy tissue with values similar to non-inoculated leaves; water-soaked/necrotic tissue with values near zero; and non-necrotic disease-affected tissue with intermediate values, which preceded or surrounded water-soaked/necrotic tissue. Quantification of Fv/Fm images showed that there were no changes until late in the biotrophic phase when spots of intermediate Fv/Fm appeared in visibly normal tissue. Those became water-soaked approx. 24 h later and then turned necrotic. 
Later in the necrotrophic phase, there was a rapid increase in affected and necrotic tissue, followed by a slower increase as necrotic areas merged. Treatment with the induced systemic resistance activator, 2R,3R-butanediol, delayed affected and necrotic tissue development by approx. 24 h. Also, the halo of affected tissue was narrower, indicating that plant cells retained a higher photosystem II efficiency longer prior to death. While chlorophyll fluorescence imaging can reveal much about the physiology of infected plants, this study demonstrates that it is also a practical tool for quantifying hemibiotrophic fungal infections, including affected tissue that appears visually normal but is damaged by infection.
We assert that our approach can be generalized to classification of scientific research papers in different disciplines.", "title": "" }, { "docid": "c731c1fb8a1b1a8bd6ab8b9165de5498", "text": "Video Game Software Development is a promising area of empirical research because our first observations in an industry environment identified a lack of systematic process and method support and rarely conducted/documented studies. Nevertheless, video games, as specific types of software products, focus strongly on user interface and game design. Thus, engineering processes, methods for game construction and verification/validation, and best-practices, derived from traditional software engineering, might be applicable in the context of video game development. We selected the Austrian games industry as a manageable and promising starting point for systematically capturing the state-of-the-practice in video game development. In this paper we present the survey design and report on the first results of a national survey in the Austrian games industry. The results of the survey showed that the Austrian games industry is organized in a set of small and young studios with a trend toward ad-hoc and flexible development processes and limitations in systematic method support.", "title": "" }, { "docid": "9bb8a69b500d7d3ab5299262c8f17726", "text": "Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. 
Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal/aYahoo. Our approach outperforms state-of-the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin.", "title": "" }, { "docid": "8eb0edd6d378a627c61f9228745ef36e", "text": "Unlike radial flux machines, a slotted axial flux machine has a particular airgap flux distribution which is a function of the machine diameter. Due to the rectangular slot geometry, stator teeth present a trapezoidal geometry with small tooth width close to the shaft, increasing as the diameter becomes larger. This fact introduces an uneven airgap flux distribution if a constant flux source, such as a rectangular PM, is utilized to magnetize the machine. As a result, flux density over the stator tooth becomes irregular, inductance parameters are a function of the stator diameter and saliency varies according to machine load. All these effects degrade machine power capability for low and rated load. In this paper, a novel axial flux PM machine with tangential magnetization is presented. An analytic and numerical study is carried out to consider stator tooth geometry and its effect over machine saliency ratio.", "title": "" }, { "docid": "da19fd683e64b0192bd52eadfade33a2", "text": "For professional users such as firefighters and other first responders, GNSS positioning technology (GPS, assisted GPS) can satisfy outdoor positioning requirements in many instances. However, there is still a need for high-performance deep indoor positioning for use by these same professional users. 
This need has already been clearly expressed by various communities of end users in the context of WearIT@Work, an R&D project funded by the European Community's Sixth Framework Program. It is known that map matching can help for indoor pedestrian navigation. In most previous research, it was assumed that detailed building plans are available. However, in many emergency / rescue scenarios, only very limited building plan information may be at hand. For example, a building outline might be obtained from aerial photographs or cadastral databases. Alternatively, an escape plan posted at the entrances to many buildings would yield only approximate exit door and stairwell locations as well as hallway and room orientation. What is not known is how much map information is really required for a USAR mission and how much each level of map detail might help to improve positioning accuracy. Obviously, the geometry of the building and the course through it will be factors to consider. The purpose of this paper is to show how a previously published Backtracking Particle Filter (BPF) can be combined with different levels of building plan detail to improve PDR performance. A new in/out scenario that might be typical of a reconnaissance mission during a fire in a two-story office building was evaluated. Using only external wall information, the new scenario yields positioning performance (2.56 m mean 2D error) that is greatly superior to the PDR-only, no map base case (7.74 m mean 2D error). This result has substantial practical significance since this level of building plan detail could be quickly and easily generated in many emergency instances. The technique could be used to mitigate heading errors that result from exposing the IMU to extreme operating conditions. 
It is hoped that this mitigating effect will also occur for more irregular paths and in larger traversed spaces such as parking garages and warehouses.", "title": "" }, { "docid": "a280f710b0e41d844f1b9c76e7404694", "text": "Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. Limitations and implications are discussed.", "title": "" }, { "docid": "698ff874df9ec0ee7a2b45f1ef52a09e", "text": "A lot of studies provide strong evidence that traditional predictive regression models face significant challenges in out-of-sample predictability tests due to model uncertainty and parameter instability. Recent studies introduce particular strategies that overcome these problems. Support Vector Machine (SVM) is a relatively new learning algorithm that has the desirable characteristics of the control of the decision function, the use of the kernel method, and the sparsity of the solution. 
In this paper, we present a theoretical and empirical framework to apply the Support Vector Machines strategy to predict the stock market. Firstly, four company-specific and six macroeconomic factors that may influence the stock trend are selected for further stock multivariate analysis. Secondly, Support Vector Machine is used in analyzing the relationship of these factors and predicting the stock performance. Our results suggest that SVM is a powerful predictive tool for stock predictions in the financial market.", "title": "" }, { "docid": "cf428835fa19d39c9c4488ab9c715fbb", "text": "Principal Component Analysis (PCA) is a mathematical procedure widely used in exploratory data analysis, signal processing, etc. However, it is often considered a black box operation whose results and procedures are difficult to understand. The goal of this paper is to provide a detailed explanation of PCA based on a designed visual analytics tool that visualizes the results of principal component analysis and supports a rich set of interactions to assist the user in better understanding and utilizing PCA. The paper begins by describing the relationship between PCA and singular value decomposition (SVD), the method used in our visual analytics tool. Then a detailed explanation of the interactive visual analytics tool, including advantages and limitations, is provided.", "title": "" } ]
scidocsrr
b915896fb257b3b9c4b1d38cebd80ddb
An improved K-nearest-neighbor algorithm for text categorization
[ { "docid": "286ccc898eb9bdf2aae7ed5208b1ae18", "text": "It has recently been argued that a Naive Bayesian classifier can be used to filter unsolicited bulk e-mail (“spam”). We conduct a thorough evaluation of this proposal on a corpus that we make publicly available, contributing towards standard benchmarks. At the same time we investigate the effect of attribute-set size, training-corpus size, lemmatization, and stop-lists on the filter’s performance, issues that had not been previously explored. After introducing appropriate cost-sensitive evaluation measures, we reach the conclusion that additional safety nets are needed for the Naive Bayesian anti-spam filter to be viable in practice.", "title": "" } ]
[ { "docid": "4edc0f70d6b8d599e28d245cbd8af31e", "text": "To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.", "title": "" }, { "docid": "d4954bab5fc4988141c509a6d6ab79db", "text": "Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. 
In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE). This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.", "title": "" }, { "docid": "a1d58b3a9628dc99edf53c1112dc99b8", "text": "Multiple criteria decision-making (MCDM) research has developed rapidly and has become a main area of research for dealing with complex decision problems. The purpose of the paper is to explore the performance evaluation model. This paper develops an evaluation model based on the fuzzy analytic hierarchy process and the technique for order performance by similarity to ideal solution, fuzzy TOPSIS, to help the industrial practitioners for the performance evaluation in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The proposed method enables decision analysts to better understand the complete evaluation process and provide a more accurate, effective, and systematic decision support tool. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "02bd814b19eacf70339218f910c9a644", "text": "BACKGROUND\nAlthough \"traditional\" face-lifting techniques can achieve excellent improvement along the jawline and neck, they often have little impact on the midface area. 
Thus, many different types of procedures have been developed to provide rejuvenation in this region, usually contemplating various dissection planes, incisions, and suspension vectors.\n\n\nMETHODS\nA 7-year observational study of 350 patients undergoing midface lift was analyzed. The authors suspended the midface flap, anchoring to the deep temporal aponeurosis with a suspender-like suture (superolateral vector), or directly to the lower orbital rim with a belt-like suture (superomedial vector). Subjective and objective methods were used to evaluate the results. The subjective methods included a questionnaire completed by the patients. The objective method involved the evaluation of preoperative and postoperative photographs by a three-member jury instructed to compare the \"critical\" anatomical areas of the midface region: malar eminence, nasojugal groove, nasolabial fold, and jowls in the lower portion of the cheeks. The average follow-up period was 24 months.\n\n\nRESULTS\nHigh satisfaction was noticeable from the perceptions of both the jury and the patients. Objective evaluation evidenced that midface lift with temporal anchoring was more efficient for the treatment of malar eminence, whereas midface lift with transosseous periorbital anchoring was more efficient for the treatment of nasojugal groove.\n\n\nCONCLUSIONS\nThe most satisfying aspect of the adopted techniques is a dramatic facial rejuvenation and preservation of the patient's original youthful identity. Furthermore, choosing the most suitable technique respects the patient's needs and enables correction of the specific defects.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.", "title": "" }, { "docid": "6ba76c4e9cbe20297bbf662250d6dc91", "text": "Interactive TV research encompasses a rather diverse body of work (e.g. multimedia, HCI, CSCW, UIST, user modeling, media studies) that has accumulated over the past 20 years. 
In this article, we highlight the state-of-the-art and consider two basic issues: What is interactive TV research? Can it help us reinvent the practices of creating, sharing and watching TV? We survey the literature and identify three concepts that have been inherent in interactive TV research: 1) interactive TV as content creation, 2) interactive TV as a content and experience sharing process, and 3) interactive TV as control of audiovisual content. We propose this simple taxonomy (create-share-control) as an evolutionary step over the traditional hierarchical produce-distribute-consume paradigm. Moreover, we highlight the importance of sociability in all phases of the create-share-control model.", "title": "" }, { "docid": "54b43b5e3545710dfe37f55b93084e34", "text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. 
The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.", "title": "" }, { "docid": "e519d705cd52b4eb24e4e936b849b3ce", "text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how well these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.", "title": "" }, { "docid": "5184c27b7387a0cbedb1c3a393f797fa", "text": "Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. 
However, there exists little work that systematically discovers those heuristics that would eventually be helpful in preventing malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.", "title": "" }, { "docid": "2e65ae613aa80aac27d5f8f6e00f5d71", "text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). 
We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215", "title": "" }, { "docid": "99aaea5ec8f90994a9fa01bfc0131ee2", "text": "Beyond simply acting as thoroughfares for motor vehicles, urban streets often double as public spaces. Urban streets are places where people walk, shop, meet, and generally engage in the diverse array of social and recreational activities that, for many, are what makes urban living enjoyable. And beyond even these quality-of-life benefits, pedestrian-friendly urban streets have been increasingly linked to a host of highly desirable social outcomes, including economic growth and innovation (Florida, ), improvements in air quality (Frank et al., ), and increased physical fitness and health (Frank et al., ), to name only a few. 
For these reasons, many groups and individuals encourage the design of “livable” streets, or streets that seek to better integrate the needs of pedestrians and local developmental objectives into a roadway’s design. There has been a great deal of work describing the characteristics of livable streets (see Duany et al., ; Ewing, ; Jacobs, ), and there is general consensus on their characteristics: livable streets, at a minimum, seek to enhance the pedestrian character of the street by providing a continuous sidewalk network and incorporating design features that minimize the negative impacts of motor vehicle use on pedestrians. Of particular importance is the role played by roadside features such as street trees and on-street parking, which serve to buffer the pedestrian realm from potentially hazardous oncoming traffic, and to provide spatial definition to the public right-of-way. Indeed, many livability advocates assert that trees, as much as any other single feature, can play a central role in enhancing a roadway’s livability (Duany et al., ; Jacobs, ). While most would agree that the inclusion of trees and other streetscape features enhances the aesthetic quality of a roadway, there is substantive disagreement about their safety effects (see Figure ). Conventional engineering practice encourages the design of roadsides that will allow a vehicle leaving the travelway to safely recover before encountering a potentially hazardous fixed object. When one considers the aggregate statistics on run-off-roadway crashes, there is indeed ", "title": "" }, { "docid": "3d9187bbc9a6bad0208ff560b3bcb57d", "text": "Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. 
On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results.", "title": "" }, { "docid": "226fa477fa59b930639435f76ab6a621", "text": "Mobile Augmented Reality (AR) is most commonly implemented using a camera and a flat screen. Such implementation removes binocular disparity from users' observation. To compensate, people use alternative depth cues (e.g. depth ordering). However, these cues may also get distorted in certain AR implementations, creating depth distortion which is problematic in situations where precise hand interaction within the AR workspace is required such as when transcribing augmented instructions to physical objects (e.g. virtual tracing -- creating a physical sketch on a 2D or 3D object given a virtual image on a mobile device). In this paper we explore how depth distortion affects 3D virtual tracing by implementing a first-of-its-kind 3D virtual tracing prototype and running an observational study. Drawing performance exceeded our expectations suggesting that the lack of visual depth cues, whilst holding the object in hand, is not as problematic as initially predicted. 
However, when the object was placed on the stand and participants drew with only one hand (the other holding the phone), their performance drastically decreased.", "title": "" }, { "docid": "a47e0a04383cc379994bfae6d929e0f6", "text": "This paper shows that echo state networks are universal uniform approximants in the context of discrete-time fading memory filters with uniformly bounded inputs defined on negative infinite times. This result guarantees that any fading memory input/output system in discrete time can be realized as a simple finite-dimensional neural network-type state-space model with a static linear readout map. This approximation is valid for infinite time intervals. The proof of this statement is based on fundamental results, also presented in this work, about the topological nature of the fading memory property and about reservoir computing systems generated by continuous reservoir maps.", "title": "" }, { "docid": "aedb6c6bce85ca8c58b3a4ef0850f3ff", "text": "Data assurance and resilience are crucial security issues in cloud-based IoT applications. With the widespread adoption of drones in IoT scenarios such as warfare, agriculture and delivery, effective solutions to protect data integrity and communications between drones and the control system have been in urgent demand to prevent potential vulnerabilities that may cause heavy losses. To secure drone communication during data collection and transmission, as well as preserve the integrity of collected data, we propose a distributed solution by utilizing blockchain technology along with the traditional cloud server. Instead of registering the drone itself to the blockchain, we anchor the hashed data records collected from drones to the blockchain network and generate a blockchain receipt for each data record stored in the cloud, reducing the burden of moving drones with the limit of battery and process capability while gaining enhanced security guarantee of the data. 
This paper presents the idea of securing drone data collection and communication in combination with a public blockchain for provisioning data integrity and cloud auditing. The evaluation shows that our system is a reliable and distributed system for drone data assurance and resilience with acceptable overhead and scalability for a large number of drones.", "title": "" }, { "docid": "45c3c54043337e91a44e71945f4d63dd", "text": "Neutrophils are being increasingly recognized as an important element in tumor progression. They have been shown to exert important effects at nearly every stage of tumor progression with a number of studies demonstrating that their presence is critical to tumor development. Novel aspects of neutrophil biology have recently been elucidated and its contribution to tumorigenesis is only beginning to be appreciated. Neutrophil extracellular traps (NETs) are neutrophil-derived structures composed of DNA decorated with antimicrobial peptides. They have been shown to trap and kill microorganisms, playing a critical role in host defense. However, their contribution to tumor development and metastasis has recently been demonstrated in a number of studies highlighting NETs as a potentially important therapeutic target. Here, studies implicating NETs as facilitators of tumor progression and metastasis are reviewed. In addition, potential mechanisms by which NETs may exert these effects are explored. Finally, the ability to target NETs therapeutically in human neoplastic disease is highlighted.", "title": "" }, { "docid": "bde769df506e361bf374bd494fc5db6f", "text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. 
A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.", "title": "" }, { "docid": "37c4c0d309c9543f3d9e3744b2362e4d", "text": "The paper presents a new control strategy for limiting the dc-link voltage fluctuation for a back-to-back pulsewidth modulation converter in a doubly fed induction generator (DFIG) for wind turbine systems. The reasons for dc-link voltage fluctuation are analyzed. An improved control strategy with the instantaneous rotor power feedback is proposed to limit the fluctuation range of the dc-link voltage. An experimental rig is set up to validate the proposed strategy, and the dynamic performances of the DFIG are compared with the traditional control method under a constant grid voltage. Furthermore, the capabilities of keeping the dc-link voltage stable are also compared in the ride-through control of DFIG during a three-phase grid fault, by using a developed 2 MW DFIG wind power system model. Both the experimental and simulation results have shown that the proposed control strategy is more effective, and the fluctuation of the dc-link voltage may be successfully limited in a small range under a constant grid voltage and a non-serious grid voltage dip.", "title": "" }, { "docid": "5ebd92444b69b2dd8e728de2381f3663", "text": "A mind is a computer.", "title": "" }, { "docid": "024cc15c164656f90ade55bf3c391405", "text": "Unmanned aerial vehicles (UAVs), also known as drones, have many applications and they are a current trend across many industries. They can be used for delivery, sports, surveillance, professional photography, cinematography, military combat, natural disaster assistance, security, and the list grows every day. 
Programming opens an avenue to automate many processes of daily life and with the drone as aerial programmable eyes, security and surveillance can become more efficient and cost effective. At Barry University, parking is becoming an issue as the number of people visiting the school greatly outnumbers the convenient parking locations. This has caused a multitude of hazards in parking lots due to people illegally parking, as well as unregistered vehicles parking in reserved areas. In this paper, we explain how automated drone surveillance is utilized to detect unauthorized parking at Barry University. The automated process is incorporated into a Java application and completed in three steps: collecting visual data, processing data automatically, and sending automated responses and cues to the operator of the system.", "title": "" }, { "docid": "8750fc51d19bbf0cbae2830638f492fd", "text": "Smartphones are increasingly becoming an ordinary part of our daily lives. With their remarkable capacity, applications used in these devices are extremely varied. In terms of language teaching, the use of these applications has opened new windows of opportunity, innovatively shaping the way instructors teach and students learn. This 4-week-long study aimed to investigate the effectiveness of a mobile application on teaching 40 figurative idioms from the Michigan Corpus of Academic Spoken English (MICASE) corpus compared to traditional activities. A quasi-experimental research design with pretest and posttest was employed to determine the differences between the scores of the control (n=25) and the experimental group (n=25) formed with convenience sampling. Results indicate that participants in the experimental group performed significantly better in the posttest, demonstrating the effectiveness of the mobile application used in this study on learning idioms. The study also provides recommendations towards the use of mobile applications in teaching vocabulary.", "title": "" } ]
scidocsrr
a7be13a5c5754d88c16eb15e46ae9992
Big data analysis using Hadoop cluster
[ { "docid": "b12cbcf5e4c9ec3bf7f9fc0c5dd11b67", "text": "This tutorial is motivated by the clear need of many organizations, companies, and researchers to deal with big data volumes efficiently. Examples include web analytics applications, scientific applications, and social networks. A popular data processing engine for big data is Hadoop MapReduce. Early versions of Hadoop MapReduce suffered from severe performance problems. Today, this is becoming history. There are many techniques that can be used with Hadoop MapReduce jobs to boost performance by orders of magnitude. In this tutorial we teach such techniques. First, we will briefly familiarize the audience with Hadoop MapReduce and motivate its use for big data processing. Then, we will focus on different data management techniques, going from job optimization to physical data organization like data layouts and indexes. Throughout this tutorial, we will highlight the similarities and differences between Hadoop MapReduce and Parallel DBMS. Furthermore, we will point out unresolved research problems and open issues.", "title": "" }, { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and requires more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. 
In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" } ]
[ { "docid": "83a968fcd2d77de796a8161b6dead9bc", "text": "We introduce a deep learning-based method to generate full 3D hair geometry from an unconstrained image. Our method can recover local strand details and has real-time performance. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest neighbor retrieval and then perform ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation for hairstyles, which allows us to interpolate naturally between hairstyles. We use a large set of rendered synthetic hair models to train our network. Our method scales to real images because an intermediate 2D orientation field, automatically calculated from the real image, factors out the difference between synthetic and real hairs. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures, and show reconstructed hair sequences from videos.", "title": "" }, { "docid": "5b6daefbefd44eea4e317e673ad91da3", "text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. 
Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.", "title": "" }, { "docid": "e51f7fde238b0896df22d196b8c59c1a", "text": "The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions such as the grey-world and white patch assumptions. In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions found in images. To this end, images are first classified into stages (rough 3D geometry models). According to the stage models, images are divided into different regions using hard and soft segmentation. After that, the best color constancy algorithm is selected for each geometry segment. As a result, light source estimation is tuned to the global scene geometry. Our algorithm opens the possibility to estimate the remote scene illumination color, by distinguishing nearby light source from distant illuminants. Experiments on large scale image datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 14% of median angular error. 
When using an ideal classifier (i.e., all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 31% of median angular error compared to the best-performing single color constancy algorithm.", "title": "" }, { "docid": "137fc87bf1152e30af2109b260f78460", "text": "This paper presents a novel feature learning model for cyber security tasks. We propose to use Auto-encoders (AEs), as a generative model, to learn latent representations of different feature sets. We show how well the AE is capable of automatically learning a reasonable notion of semantic similarity among input features. Specifically, the AE accepts a feature vector, obtained from cyber security phenomena, and extracts a code vector that captures the semantic similarity between the feature vectors. This similarity is embedded in an abstract latent representation. Because the AE is trained in an unsupervised fashion, the main part of this success comes from the appropriate original feature set that is used in this paper. It can also provide more discriminative features in contrast to other feature engineering approaches. Furthermore, the scheme can reduce the dimensionality of the features, thereby significantly minimising the memory requirements. We selected two different cyber security tasks: network-based anomaly intrusion detection and malware classification. We have analysed the proposed scheme with various classifiers using publicly available datasets for network anomaly intrusion detection and malware classification. Several appropriate evaluation metrics show improvement compared to prior results.", "title": "" }, { "docid": "1adabe21b99d7b26851d78c9a607b01d", "text": "Text Summarization is a way to produce a text, which contains the significant portion of information of the original text(s). 
Various methodologies have been developed to date that depend upon several parameters to find the summary, such as the position, format and type of the sentences in an input text, the formats of different words, the frequency of a particular word in a text, etc. But these parameters vary with different languages and input sources, and as a result the performance of the algorithm is greatly affected. The proposed approach summarizes a text without depending upon those parameters. Here, the relevance of the sentences within the text is derived using the Simplified Lesk algorithm and WordNet, an online dictionary. This approach is independent of both the format of the text and the position of a sentence in the text; moreover, since the sentences are first arranged according to their relevance before the summarization process, the percentage of summarization can be varied according to need. The proposed approach gives around 80% accurate results on 50% summarization of the original text with respect to the manually summarized result, evaluated on 50 texts of different types and lengths. We have achieved satisfactory results even up to 25% summarization of the original text.", "title": "" }, { "docid": "5f369b620b029a7e0c54d5d867954d5f", "text": "Clustering aims at representing large datasets by a smaller number of prototypes or clusters. It brings simplicity in modeling data and thus plays a central role in the process of knowledge discovery and data mining. Data mining tasks these days require fast and accurate partitioning of huge datasets, which may come with a variety of attributes or features. This, in turn, imposes severe computational requirements on the relevant clustering techniques. A family of bio-inspired algorithms, well known as Swarm Intelligence (SI), has recently emerged that meets these requirements and has successfully been applied to a number of real-world clustering problems. This chapter explores the role of SI in clustering different kinds of datasets. 
It finally describes a new SI technique for partitioning any dataset into an optimal number of groups through one run of optimization. Computer simulations undertaken in this research have also been provided to demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "968555bbada2d930b97d8bb982580535", "text": "With the recent developments in three-dimensional (3-D) scanner technologies and photogrammetric techniques, it is now possible to acquire and create accurate models of historical and archaeological sites. In this way, unrestricted access to these sites, which is highly desirable from both a research and a cultural perspective, is provided. Through the process of virtualisation, numerous virtual collections are created. These collections must be archived, indexed and visualised over a very long period of time in order to be able to monitor and restore them as required. However, the intrinsic complexities and tremendous importance of ensuring long-term preservation and access to these collections have been widely overlooked. This neglect may lead to the creation of a so-called “Digital Rosetta Stone”, where models become obsolete and the data cannot be interpreted or virtualised. This paper presents a framework for the long-term preservation of 3-D cultural heritage data as well as the application thereof in monitoring, restoration and virtual access. The interplay between raw data and model is considered as well as the importance of calibration. Suitable archiving and indexing techniques are described and the issue of visualisation over a very long period of time is addressed. An approach to experimentation through detachment, migration and emulation is presented.", "title": "" }, { "docid": "5098414995b4ece21ed690a349f670e4", "text": "Wireless sensor networks (WSN) are used for many applications such as environmental monitoring, infrastructure security, healthcare applications, and traffic control. 
The design and development of such applications must address many challenges dictated by WSN characteristics on one hand and the targeted applications on the other. One of the emerging approaches used for relaxing these challenges is using service-oriented middleware (SOM). Service-oriented computing, in general, aims to make services available and easily accessible through standardized models and protocols without having to worry about the underlying infrastructures, development models, or implementation details. SOM could play an important role in facilitating the design, development, and implementation of service-oriented systems. This will help achieve interoperability, loose coupling, and heterogeneity support. Furthermore, SOM approaches will provision non-functional requirements like scalability, reliability, flexibility, and Quality of Service (QoS) assurance. This paper surveys the current work in SOM and the trends and challenges to be addressed when designing and developing these solutions for WSN.", "title": "" }, { "docid": "e1a0c57edb51bd304f97eec99b30d3c7", "text": "This thesis deals with a Bayesian neural network model. The focus is on how to use the model for automatic classification, i.e. on how to train the neural network to classify objects from some domain, given a database of labeled examples from the domain. The original Bayesian neural network is a one-layer network implementing a naive Bayesian classifier. It is based on the assumption that different attributes of the objects appear independent of each other. This work has been aimed at extending the original Bayesian neural network model, mainly focusing on three different aspects. First the model is extended to a multi-layer network, to relax the independence requirement. This is done by introducing a hidden layer of complex columns, groups of units which take input from the same set of input attributes. 
Two different types of complex column structures in the hidden layer are studied and compared. An information theoretic measure is used to decide which input attributes to consider together in complex columns. Also used are ideas from Bayesian statistics, as a means to estimate the probabilities from data which are required to set up the weights and biases in the neural network. The use of uncertain evidence and continuous valued attributes in the Bayesian neural network are also treated. Both things require the network to handle graded inputs, i.e. probability distributions over some discrete attributes given as input. Continuous valued attributes can then be handled by using mixture models. In effect, each mixture model converts a set of continuous valued inputs to a discrete number of probabilities for the component densities in the mixture model. Finally a query-reply system based on the Bayesian neural network is described. It constitutes a kind of expert system shell on top of the network. Rather than requiring all attributes to be given at once, the system can ask for the attributes relevant for the classification. Information theory is used to select the attributes to ask for. The system also offers an explanatory mechanism, which can give simple explanations of the state of the network, in terms of which inputs mean the most for the outputs. These extensions to the Bayesian neural network model are evaluated on a set of different databases, both realistic and synthetic, and the classification results are compared to those of various other classification methods on the same databases. The conclusion is that the Bayesian neural network model compares favorably to other methods for classification. In this work much inspiration has been taken from various branches of machine learning. The goal has been to combine the different ideas into one consistent and useful neural network model. 
A main theme throughout is to utilize independencies between attributes, to decrease the number of free parameters, and thus to increase the generalization capability of the method. Significant contributions are the method used to combine the outputs from mixture models over different subspaces of the domain, and the use of Bayesian estimation of parameters in the expectation maximization method during training of the mixture models.", "title": "" }, { "docid": "190f7750701c6db1a50fc02368a014c9", "text": "MOTIVATION\nA large choice of tools exists for many standard tasks in the analysis of high-throughput sequencing (HTS) data. However, once a project deviates from standard workflows, custom scripts are needed.\n\n\nRESULTS\nWe present HTSeq, a Python library to facilitate the rapid development of such scripts. HTSeq offers parsers for many common data formats in HTS projects, as well as classes to represent data, such as genomic coordinates, sequences, sequencing reads, alignments, gene model information and variant calls, and provides data structures that allow for querying via genomic coordinates. We also present htseq-count, a tool developed with HTSeq that preprocesses RNA-Seq data for differential expression analysis by counting the overlap of reads with genes.\n\n\nAVAILABILITY AND IMPLEMENTATION\nHTSeq is released as an open-source software under the GNU General Public Licence and available from http://www-huber.embl.de/HTSeq or from the Python Package Index at https://pypi.python.org/pypi/HTSeq.", "title": "" }, { "docid": "978deffd9337932a217dde27130be0e4", "text": "Semantic memory includes all acquired knowledge about the world and is the basis for nearly all human activity, yet its neurobiological foundation is only now becoming clear. 
Recent neuroimaging studies demonstrate two striking results: the participation of modality-specific sensory, motor, and emotion systems in language comprehension, and the existence of large brain regions that participate in comprehension tasks but are not modality-specific. These latter regions, which include the inferior parietal lobe and much of the temporal lobe, lie at convergences of multiple perceptual processing streams. These convergences enable increasingly abstract, supramodal representations of perceptual experience that support a variety of conceptual functions including object recognition, social cognition, language, and the remarkable human capacity to remember the past and imagine the future.", "title": "" }, { "docid": "6b1dd01c57f967e3caf83af9343099c5", "text": "We have devised and implemented a novel computational strategy for de novo design of molecules with desired properties termed ReLeaSE (Reinforcement Learning for Structural Evolution). On the basis of deep and reinforcement learning (RL) approaches, ReLeaSE integrates two deep neural networks—generative and predictive—that are trained separately but are used jointly to generate novel targeted chemical libraries. ReLeaSE uses simple representation of molecules by their simplified molecular-input line-entry system (SMILES) strings only. Generative models are trained with a stack-augmented memory network to produce chemically feasible SMILES strings, and predictive models are derived to forecast the desired properties of the de novo–generated compounds. In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm. In the second phase, both models are trained jointly with the RL approach to bias the generation of new chemical structures toward those with the desired physical and/or biological properties. 
In the proof-of-concept study, we have used the ReLeaSE method to design chemical libraries with a bias toward structural complexity or toward compounds with maximal, minimal, or specific range of physical properties, such as melting point or hydrophobicity, or toward compounds with inhibitory activity against Janus protein kinase 2. The approach proposed herein can find a general use for generating targeted chemical libraries of novel compounds optimized for either a single desired property or multiple properties.", "title": "" }, { "docid": "f2fc77ae984b27bc90a24454d5a7c762", "text": "We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans. Specifically, we utilize Fisher information to establish a model-derived prediction of sensitivity to local perturbations of an image. For a given image, we compute the eigenvectors of the Fisher information matrix with largest and smallest eigenvalues, corresponding to the model-predicted mostand least-noticeable image distortions, respectively. For human subjects, we then measure the amount of each distortion that can be reliably detected when added to the image. We use this method to test the ability of a variety of representations to mimic human perceptual sensitivity. We find that the early layers of VGG16, a deep neural network optimized for object recognition, provide a better match to human perception than later layers, and a better match than a 4-stage convolutional neural network (CNN) trained on a database of human ratings of distorted image quality. On the other hand, we find that simple models of early visual processing, incorporating one or more stages of local gain control, trained on the same database of distortion ratings, provide substantially better predictions of human sensitivity than either the CNN, or any combination of layers of VGG16. 
Human capabilities for recognizing complex visual patterns are believed to arise through a cascade of transformations, implemented by neurons in successive stages in the visual system. Several recent studies have suggested that representations of deep convolutional neural networks trained for object recognition can predict activity in areas of the primate ventral visual stream better than models constructed explicitly for that purpose (Yamins et al. [2014], Khaligh-Razavi and Kriegeskorte [2014]). These results have inspired exploration of deep networks trained on object recognition as models of human perception, explicitly employing their representations as perceptual distortion metrics or loss functions (Hénaff and Simoncelli [2016], Johnson et al. [2016], Dosovitskiy and Brox [2016]). On the other hand, several other studies have used synthesis techniques to generate images that indicate a profound mismatch between the sensitivity of these networks and that of human observers. Specifically, Szegedy et al. [2013] constructed image distortions, imperceptible to humans, that cause their networks to grossly misclassify objects. Similarly, Nguyen and Clune [2015] optimized randomly initialized images to achieve reliable recognition by a network, but found that the resulting ‘fooling images’ were uninterpretable by human viewers. Simpler networks, designed for texture classification and constrained to mimic the early visual system, do not exhibit such failures (Portilla and Simoncelli [2000]). These results have prompted efforts to understand why generalization failures of this type are so consistent across deep network architectures, and to develop more robust training methods to defend networks against attacks designed to exploit these weaknesses (Goodfellow et al. [2014]). [Presented at the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.] 
From the perspective of modeling human perception, these synthesis failures suggest that representational spaces within deep neural networks deviate significantly from those of humans, and that methods for comparing representational similarity, based on fixed object classes and discrete sampling of the representational space, are insufficient to expose these deviations. If we are going to use such networks as models for human perception, we need better methods of comparing model representations to human vision. Recent work has taken the first step in this direction, by analyzing deep networks’ robustness to visual distortions on classification tasks, as well as the similarity of classification errors that humans and deep networks make in the presence of the same kind of distortion (Dodge and Karam [2017]). Here, we aim to accomplish something in the same spirit, but rather than testing on a set of handselected examples, we develop a model-constrained synthesis method for generating targeted test stimuli that can be used to compare the layer-wise representational sensitivity of a model to human perceptual sensitivity. Utilizing Fisher information, we isolate the model-predicted most and least noticeable changes to an image. We test these predictions by determining how well human observers can discriminate these same changes. We apply this method to six layers of VGG16 (Simonyan and Zisserman [2015]), a deep convolutional neural network (CNN) trained to classify objects. We also apply the method to several models explicitly trained to predict human sensitivity to image distortions, including both a 4-stage generic CNN, an optimally-weighted version of VGG16, and a family of highly-structured models explicitly constructed to mimic the physiology of the early human visual system. Example images from the paper, as well as additional examples, are available at http://www.cns.nyu.edu/~lcv/eigendistortions/. 
1 Predicting discrimination thresholds. Suppose we have a model for human visual representation, defined by conditional density p(~r|~x), where ~x is an N-dimensional vector containing the image pixels, and ~r is an M-dimensional random vector representing responses internal to the visual system (e.g., firing rates of a population of neurons). If the image is modified by the addition of a distortion vector, ~x + αû, where û is a unit vector, and scalar α controls the amplitude of distortion, the model can be used to predict the threshold at which the distorted image can be reliably distinguished from the original image. Specifically, one can express a lower bound on the discrimination threshold in direction û for any observer or model that bases its judgments on ~r (Seriès et al. [2009]): $T(\hat{u}; \vec{x}) \ge \beta \sqrt{\hat{u}^T J^{-1}[\vec{x}]\,\hat{u}}$ (1), where β is a scale factor that depends on the noise amplitude of the internal representation (as well as experimental conditions, when measuring discrimination thresholds of human observers), and J[~x] is the Fisher information matrix (FIM; Fisher [1925]), a second-order expansion of the log likelihood: $J[\vec{x}] = E_{\vec{r}|\vec{x}}\left[\left(\frac{\partial}{\partial \vec{x}} \log p(\vec{r}|\vec{x})\right)\left(\frac{\partial}{\partial \vec{x}} \log p(\vec{r}|\vec{x})\right)^T\right]$ (2). Here, we restrict ourselves to models that can be expressed as a deterministic (and differentiable) mapping from the input pixels to mean output response vector, f(~x), with additive white Gaussian noise in the response space. The log likelihood in this case reduces to a quadratic form: $\log p(\vec{r}|\vec{x}) = -\frac{1}{2}\,[\vec{r} - f(\vec{x})]^T[\vec{r} - f(\vec{x})] + \text{const}$. Substituting this into Eq. (2) gives: $J[\vec{x}] = \left(\frac{\partial f}{\partial \vec{x}}\right)^T \frac{\partial f}{\partial \vec{x}}$. Thus, for these models, the Fisher information matrix induces a locally adaptive Euclidean metric on the space of images, as specified by the Jacobian matrix, ∂f/∂~x.", "title": "" }, { "docid": "037d8aa430923ddaaf5f7d280f5ea0c2", "text": "We describe a system that recognizes human postures with heavy self-occlusion. 
In particular, we address posture recognition in a robot assisted-living scenario, where the environment is equipped with a top-view camera for monitoring human activities. This setup is very useful because top-view cameras lead to accurate localization and limited inter-occlusion between persons, but conversely they suffer from body parts being frequently self-occluded. The conventional way of posture recognition relies on good estimation of body part positions, which turns out to be unstable in the top-view due to occlusion and foreshortening. In our approach, we learn a posture descriptor for each specific posture category. The posture descriptor encodes how well the person in the image can be `explained' by the model. The postures are subsequently recognized from the matching scores returned by the posture descriptors. We select the state-of-the-art approach of pose estimation as our posture descriptor. The results show that our method is able to correctly classify 79.7% of the test sample, which outperforms the conventional approach by over 23%.", "title": "" }, { "docid": "7012d4233e0b92008e4a4e05ae6cc143", "text": "We present a novel method for assigning fingers to notes in a polyphonic piano score. Such a mapping (called a “fingering”) is of great use to performers. To accommodate performers’ unique hand shapes, our method relies on a simple, user-specified cost function. We use dynamic programming to search the space of all possible fingerings for the optimal fingering under this cost function. Despite the simplicity of the algorithm we achieve reasonable and useful results.", "title": "" }, { "docid": "b5387b0b6fa8ba1d87bd5e4f16a7e83e", "text": "Recent studies on visual tracking have shown significant improvement in accuracy by handling the appearance variations of the target object. 
Whereas most studies present schemes to extract the time-invariant characteristics of the target and adaptively update the appearance model, the present paper concentrates on modeling the probabilistic dependency between sequential target appearances (Fig. 1-(a)). To actualize this interest, a new Bayesian tracking framework is formulated under the autoregressive Hidden Markov Model (AR-HMM), where the probabilistic dependency between sequential target appearances is implied. During the learning phase at each time step, the proposed tracker separates formerly seen target samples into several clusters based on their visual similarity, and learns cluster-specific classifiers as multiple appearance models, each of which represents a certain type of the target appearance. Then the dependency between these appearance models is learned. During the searching phase, the target state is estimated by inferring the most probable appearance model under the consideration of its dependency on formerly utilized appearance models. The proposed method is tested on 12 challenging video sequences containing targets with abrupt appearance variations, and demonstrates that it outperforms current state-of-the-art methods in accuracy.", "title": "" }, { "docid": "80d0ac8bccbf4bee233d24da4de0fe0a", "text": "Volunteers have always been extremely crucial and in urgent need for nonprofit organizations (NPOs) to sustain their continuing operations. However, it is expensive and time-consuming to recruit volunteers using traditional approaches. In the Web 2.0 era, abundant and ubiquitous social media data opens a door to the possibility of automatic volunteer identification. In this article, we aim to fully explore this possibility by proposing a scheme that is able to predict users’ volunteerism tendency from user-generated contents collected from multiple social networks based on a conceptual volunteering decision model. 
We conducted comprehensive experiments to investigate the effectiveness of our proposed scheme and further discussed its generalizability and extensibility. This novel interdisciplinary research will potentially inspire more promising and important human-centered applications.", "title": "" }, { "docid": "1f4ff9d732b3512ee9b105f084edd3d2", "text": "Today, as network environments become more complex and cyber and network threats increase, organizations use a wide variety of security solutions against today's threats. For proper and centralized control and management, a range of security features needs to be integrated into a unified security package. Unified threat management (UTM), as a comprehensive network security solution, integrates all security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). It is used especially as a router and stateful firewall. It has many packages that extend its capabilities, such as the Squid3 package, a proxy server that caches data, and SquidGuard, a redirector and access-control plugin for the Squid3 proxy server. In this paper, by implementing a UTM based on the PfSense platform, we use the Squid3 proxy server and SquidGuard proxy filter to avoid an extreme amount of unwanted uploading/downloading over the internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and its types and the PfSense platform with its key services, and introduce a simple and operational solution for security stability and reduced cost. Finally, results and statistics derived from this approach are compared with the prior condition without the PfSense platform.", "title": "" }, { "docid": "9b1cf040b59dd25528b58d281e796ad9", "text": "The rapid development of Web2.0 leads to significant information redundancy. 
Especially for a complex news event, it is difficult to understand its general idea within a single coherent picture. A complex event often contains branches, intertwining narratives, and side news, which are all called storylines. In this paper, we propose a novel solution to tackle the challenging problem of storyline extraction and reconstruction. Specifically, we first investigate two requisite properties of an ideal storyline. Then a unified algorithm is devised to extract all effective storylines by optimizing these properties at the same time. Finally, we reconstruct all extracted lines and generate the high-quality story map. Experiments on real-world datasets show that our method is quite efficient and highly competitive, bringing readers quicker, clearer, and deeper comprehension.", "title": "" }, { "docid": "d518f1b11f2d0fd29dcef991afe17d17", "text": "Applications must be able to synchronize accesses to operating system resources in order to ensure correctness in the face of concurrency and system failures. System transactions allow the programmer to specify updates to heterogeneous system resources with the OS guaranteeing atomicity, consistency, isolation, and durability (ACID). System transactions efficiently and cleanly solve persistent concurrency problems that are difficult to address with other techniques. For example, system transactions eliminate security vulnerabilities in the file system that are caused by time-of-check-to-time-of-use (TOCTTOU) race conditions. System transactions enable an unsuccessful software installation to roll back without disturbing concurrent, independent updates to the file system.\n This paper describes TxOS, a variant of Linux 2.6.22 that implements system transactions. TxOS uses new implementation techniques to provide fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity. 
The prototype demonstrates that a mature OS running on commodity hardware can provide system transactions at a reasonable performance cost. For instance, a transactional installation of OpenSSH incurs only 10% overhead, and a non-transactional compilation of Linux incurs negligible overhead on TxOS. By making transactions a central OS abstraction, TxOS enables new transactional services. For example, one developer prototyped a transactional ext3 file system in less than one month.", "title": "" } ]
scidocsrr
716b6f6cedad893bc110b912526f0873
GestureWrist and GesturePad: Unobtrusive Wearable Interaction Devices
[ { "docid": "24f141bd7a29bb8922fa010dd63181a6", "text": "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable.\nApplications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.", "title": "" }, { "docid": "c526e32c9c8b62877cb86bc5b097e2cf", "text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. 
We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.", "title": "" } ]
[ { "docid": "e2666b0eed30a4eed2ad0cde07324d73", "text": "It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population.", "title": "" }, { "docid": "d90a6f0b13b42ea44d214b3584fd41d7", "text": "Much work on the demographics of social media platforms such as Twitter has focused on the properties of individuals, such as gender or age. However, because credible detectors for organization accounts do not exist, these and future largescale studies of human behavior on social media can be contaminated by the presence of accounts belonging to organizations. We analyze organizations on Twitter to assess their distinct behavioral characteristics and determine what types of organizations are active. We first create a dataset of manually classified accounts from a representative sample of Twitter and then introduce a classifier to distinguish between organizational and personal accounts. In addition, we find that although organizations make up less than 10% of the accounts, they are significantly more connected, with an order of magnitude more friends and followers.", "title": "" }, { "docid": "0a916e98a315c44a5be68bb1f9aef9a3", "text": "Knowledge bases, which consist of concepts, entities, attributes and relations, are increasingly important in a wide range of applications. 
We argue that knowledge about attributes (of concepts or entities) plays a critical role in inferencing. In this paper, we propose methods to derive attributes for millions of concepts and we quantify the typicality of the attributes with regard to their corresponding concepts. We employ multiple data sources such as web documents, search logs, and existing knowledge bases, and we derive typicality scores for attributes by aggregating different distributions derived from different sources using different methods. To the best of our knowledge, ours is the first approach to integrate concept- and instance-based patterns into probabilistic typicality scores that scale to broad concept space. We have conducted extensive experiments to show the effectiveness of our approach.", "title": "" }, { "docid": "451a52573c5a4d81ea8a58a583afbca7", "text": "Sharding is a fundamental building block of large-scale applications, but most have their own custom, ad-hoc implementations. Our goal is to make sharding as easily reusable as a filesystem or lock manager. Slicer is Google’s general purpose sharding service. It monitors signals such as load hotspots and server health to dynamically shard work over a set of servers. Its goals are to maintain high availability and reduce load imbalance while minimizing churn from moved work. In this paper, we describe Slicer’s design and implementation. Slicer has the consistency and global optimization of a centralized sharder while approaching the high availability, scalability, and low latency of systems that make local decisions. It achieves this by separating concerns: a reliable data plane forwards requests, and a smart control plane makes load-balancing decisions off the critical path. Slicer’s small but powerful API has proven useful and easy to adopt in dozens of Google applications. It is used to allocate resources for web service front-ends, coalesce writes to increase storage bandwidth, and increase the efficiency of a web cache. 
It currently handles 2-7M req/s of production traffic. The median production Slicer-managed workload uses 63% fewer resources than it would with static sharding.", "title": "" }, { "docid": "001b5a976b6b6ccb15ab80ead4617422", "text": "Multivariate time-series modeling and forecasting is an important problem with numerous applications. Traditional approaches such as VAR (vector auto-regressive) models and more recent approaches such as RNNs (recurrent neural networks) are indispensable tools in modeling time-series data. In many multivariate time series modeling problems, there is usually a significant linear dependency component, for which VARs are suitable, and a nonlinear component, for which RNNs are suitable. Modeling such time series with only VAR or only RNNs can lead to poor predictive performance or complex models with large training times. In this work, we propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs. R2N2s can be trained using existing algorithms for VARs and RNNs. Through an extensive empirical evaluation on two real world datasets (aviation and climate domains), we show that R2N2 is competitive, usually better than VAR or RNN used alone. We also show that R2N2 is faster to train as compared to an RNN, while requiring fewer hidden units.", "title": "" }, { "docid": "cf4509b8d2b458f608a7e72165cdf22b", "text": "Nowadays, blockchain is becoming a synonym for distributed ledger technology. However, blockchain is only one of the specializations in the field and is currently well-covered in existing literature, but mostly from a cryptographic point of view. Besides blockchain technology, a new paradigm is gaining momentum: directed acyclic graphs. The contribution presented in this paper is twofold. Firstly, the paper analyzes distributed ledger technology with an emphasis on the features relevant to distributed systems. 
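The R2N2 entry above describes a two-stage pipeline: fit a linear model first, then model its residuals with a second learner. A minimal sketch of that idea, with a least-squares AR(1) fit standing in for VAR and a trivial mean predictor standing in for the RNN (the series and all numbers are illustrative, not the paper's data):

```python
# Sketch of the R2N2 idea: linear model first, then a residual model.
# The AR(1) fit stands in for VAR; the bias term stands in for the RNN.
def fit_ar1(series):
    # Least-squares fit of x_t ≈ a * x_{t-1} (no intercept, for brevity)
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

series = [1.0, 1.2, 1.5, 1.3, 1.6, 1.8, 1.7]
a = fit_ar1(series)
residuals = [series[t] - a * series[t - 1] for t in range(1, len(series))]
bias = sum(residuals) / len(residuals)   # stand-in for the RNN residual model
forecast = a * series[-1] + bias         # combined linear + residual forecast
```

In the actual R2N2 design, the residual sequence would be fed to an RNN rather than averaged; the point of the sketch is only the decomposition into a linear stage plus a residual stage.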
Secondly, the paper analyzes the usage of the directed acyclic graph paradigm in the context of distributed ledgers, and compares it with blockchain-based solutions. The two paradigms are compared using representative implementations: Bitcoin, Ethereum and Nano. We examine representative solutions in terms of the applied data structures for maintaining the ledger, consensus mechanisms, transaction confirmation confidence, ledger size, and scalability.", "title": "" }, { "docid": "44bffd6caa0d90798f8ebc21a10fd248", "text": "INTRODUCTION\nThis study describes quality indicators for the pre-analytical process, grouping errors according to patient risk as critical or major, and assesses their evolution over a five-year period.\n\n\nMATERIALS AND METHODS\nA descriptive study was made of the temporal evolution of quality indicators, with a study population of 751,441 analytical requests made during the period 2007-2011. The Runs Test for randomness was calculated to assess changes in the trend of the series, and the degree of control over the process was estimated by the Six Sigma scale.\n\n\nRESULTS\nThe overall rate of critical pre-analytical errors was 0.047%, with a Six Sigma value of 4.9. The total rate of sampling errors in the study period was 13.54% (P = 0.003). The highest rates were found for the indicators \"haemolysed sample\" (8.76%), \"urine sample not submitted\" (1.66%) and \"clotted sample\" (1.41%), with Six Sigma values of 3.7, 3.7 and 2.9, respectively.\n\n\nCONCLUSION\nThe magnitude of pre-analytical errors was accurately assessed. 
While processes that triggered critical errors are well controlled, the results obtained for those regarding specimen collection are borderline unacceptable; this is particularly so for the indicator \"haemolysed sample\".", "title": "" }, { "docid": "722f7073b9bf9cf9363eed0d21ae8cb4", "text": "By virtue of the increasingly large number of sensors of various kinds, information about the same object can be collected from multiple views. This mutually enriching information can help many real-world applications, such as daily activity recognition, in which both video cameras and on-body sensors are continuously collecting information. Such multivariate time series (m.t.s.) data from multiple views can lead to a significant improvement of classification tasks. However, the existing methods for time series data classification only focus on single-view data, and the benefits of mutually supporting multiple views are not taken into account. In light of this challenge, we propose a novel approach, named Multi-view Discriminative Bilinear Projections (MDBP), for extracting discriminative features from multi-view m.t.s. data. First, MDBP keeps the original temporal structure of m.t.s. data, and projects m.t.s. from different views onto a shared latent subspace. Second, MDBP incorporates discriminative information by minimizing the within-class separability and maximizing the between-class separability of m.t.s. in the shared latent subspace. Moreover, a Laplacian regularization term is designed to preserve the temporal smoothness within m.t.s. Extensive experiments on two real-world datasets demonstrate the effectiveness of our approach. Compared to the state-of-the-art multi-view learning and m.t.s. classification methods, our approach greatly improves the classification accuracy due to the full exploration of multi-view streaming data. 
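The pre-analytical quality-indicators entry above pairs error rates with Six Sigma values (e.g., a 0.047% critical-error rate with a sigma of 4.9). A common textbook conversion from a long-term defect rate to a sigma level takes the normal quantile of the non-defect rate and adds the conventional 1.5-sigma shift; this sketch uses that convention, which may differ from the method the paper actually used:

```python
from statistics import NormalDist

def sigma_level(defect_rate):
    # Standard conversion: z-quantile of the non-defect rate plus the
    # conventional 1.5-sigma long-term shift. The paper may compute its
    # sigma values differently, so treat this as an approximation.
    return NormalDist().inv_cdf(1 - defect_rate) + 1.5

print(round(sigma_level(0.00047), 1))  # 0.047% critical-error rate -> about 4.8
```

The result of roughly 4.8 is in the ballpark of the paper's reported 4.9; the small gap may come from rounding or a per-opportunity defect definition.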
Moreover, by using a feature fusion strategy, our approach further improves the classification accuracy by at least 10%.", "title": "" }, { "docid": "269cff08201fd7815e3ea2c9a786d38b", "text": "In this paper, we are interested in developing compositional models that explicitly represent pose, parts, and attributes, and tackle the tasks of attribute recognition, pose estimation, and part localization jointly. This is different from the recent trend of using CNN-based approaches for training and testing on these tasks separately with a large amount of data. Conventional attribute models typically use a large number of region-based attribute classifiers on parts of a pre-trained pose estimator, without explicitly detecting the object or its parts, or considering the correlations between attributes. In contrast, our approach jointly represents both the object parts and their semantic attributes within a unified compositional hierarchy. We apply our attributed grammar model to the task of human parsing by simultaneously performing part localization and attribute recognition. We show that our modeling helps improve performance on the pose-estimation task and also outperforms other existing methods on the attribute prediction task.", "title": "" }, { "docid": "6b92580dafc9baf21393d8f265efd5fd", "text": "Refactoring and, in particular, remodularization operations can be performed to repair the design of a software system and remove the erosion caused by software evolution. Various approaches have been proposed to support developers during the remodularization of a software system. Most of these approaches are based on the underlying assumption that developers pursue an optimal balance between cohesion and coupling when modularizing the classes of their systems. Thus, a remodularization recommender proposes a solution that implicitly provides a (near) optimal balance between such quality attributes. 
However, there is still no empirical evidence that such a balance is the desideratum of developers. This article aims at analyzing both objectively and subjectively the aforementioned phenomenon. Specifically, we present the results of (1) a large study analyzing the modularization quality, in terms of package cohesion and coupling, of 100 open-source systems, and (2) a survey conducted with 29 developers aimed at understanding the driving factors they consider when performing modularization tasks. The results achieved have been used to distill a set of lessons learned that might be considered to design more effective remodularization recommenders.", "title": "" }, { "docid": "5da804fa4c1474e27a1c91fcf5682e20", "text": "We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. Introduction. Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to English text. Our goal is to perform fully-automatic, high-quality text-to-text translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools are the source-channel model of communication, parametric probability models of language and translation, and an assortment of numerical algorithms for training such models from examples. 
This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Candide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probability theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the ARPA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2. Statistical Translation. Consider the problem of translating French text to English text. Given a French sentence f, we imagine that it was originally rendered as an equivalent English sentence e. To obtain the French, the English was transmitted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. (*Current address: Renaissance Technologies, Stony Brook, NY.) [Figure 1: The Source-Channel Formalism of Translation; an English-to-French channel followed by a French-to-English decoder.] Here f is the French text to be translated, e is the putative original English rendering, and ê is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write Pr(e | f) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence that maximizes Pr(e | f). That is, we seek ê = argmax_e Pr(e | f). By virtue of Bayes' Theorem, we have ê = argmax_e Pr(e | f) = argmax_e Pr(f | e) Pr(e) (1). The term Pr(f | e) models the probability that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr(e) models the a priori probability that e was supplied as the channel input. We call this function the language model. Each of these factors, the translation model and the language model, independently produces a score for a candidate English translation e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide selects as its translation the e that maximizes their product. This discussion begs two important questions. First, where do the models Pr(f | e) and Pr(e) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find ê? These questions are addressed in the next two sections. 2.1. Probability Models. We begin with a brief detour into probability theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model better match some body of data. Let us write c for a body of data to be modeled, and θ for a vector of parameters. The quantity Pr_θ(c), computed according to some formula involving c and θ, is called the likelihood. 157 [Human Language Technology, Plainsboro, 1994]", "title": "" }, { "docid": "cc8b42f5b5f7de3695e169d99a0a6a22", "text": "Dota 2 is a multiplayer online game in which two teams of five players control “heroes” and compete to earn gold and destroy enemy structures. Teamwork is essential and heroes are chosen to create a balanced team that will counter the opponents’ selections. 
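The Candide entry above reduces translation to ê = argmax_e Pr(f | e) · Pr(e). A toy decoder over a hand-enumerated candidate set makes the two-factor scoring concrete; every probability below is an invented illustrative number, not Candide's actual models:

```python
# Toy noisy-channel decoder for a single French sentence f = "le chien".
# Candidate English strings are scored by translation model * language model.
translation_model = {  # Pr(f | e): how likely f emerges from the channel given e
    "the dog": 0.7,
    "a dog": 0.2,
    "the cat": 0.01,
}
language_model = {     # Pr(e): prior fluency score of each candidate
    "the dog": 0.05,
    "a dog": 0.03,
    "the cat": 0.06,
}

def decode(candidates):
    # ê = argmax_e Pr(f | e) * Pr(e)   (equation 1)
    return max(candidates, key=lambda e: translation_model[e] * language_model[e])

best = decode(translation_model)  # "the dog": 0.7 * 0.05 = 0.035 beats the rest
```

The real system searches an enormous space of English strings instead of three candidates, which is exactly the second question the entry raises.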
We studied how the win rate depends on hero selection by performing logistic regression with models that incorporate interactions between heroes. Our models did not match the naive model without interactions, which had a 62% win prediction rate, suggesting cleaner data or better models are needed.", "title": "" }, { "docid": "23c2ea4422ec6057beb8fa0be12e57b3", "text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth to different degrees, as indicated by odds ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 × 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. A relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression's lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "1350f4e274947881f4562ab6596da6fd", "text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.", "title": "" }, { "docid": "118738ca4b870e164c7be53e882a9ab4", "text": "1.1. Cause and Effect . . . 465 1.2. Prerequisites of Selforganization . . . 467 1.2.1. Evolution Must Start from Random Events . . . 467 1.2.2. Instruction Requires Information . . . 467 1.2.3. Information Originates or Gains Value by Selection . . . 469 1.2.4. Selection Occurs with Special Substances under Special Conditions . . . 470", "title": "" }, { "docid": "49bc648b7588e3d6d512a65688ce23aa", "text": "Many Chinese websites (relying parties) use OAuth 2.0 as the basis of a single sign-on service to ease password management for users. Many sites support five or more different OAuth 2.0 identity providers, giving users choice in their trust point. However, although OAuth 2.0 has been widely implemented (particularly in China), little attention has been paid to security in practice. 
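The Dota 2 entry above models win rate with logistic regression that includes interactions between heroes. This sketch shows the general shape of such a model on a synthetic five-hero pool where, by construction, teams fielding both hero 0 and hero 1 win; the data, pool size, and training details are illustrative, not the paper's:

```python
import itertools, math

HEROES = range(5)  # tiny illustrative hero pool

def features(team):
    main = [1.0 if h in team else 0.0 for h in HEROES]       # main effects
    pairs = [main[i] * main[j]                                # pairwise interactions
             for i, j in itertools.combinations(HEROES, 2)]
    return [1.0] + main + pairs                               # plus intercept

def predict(w, x):
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def fit(data, steps=3000, lr=0.5):
    w = [0.0] * len(features(set()))
    for _ in range(steps):  # full-batch gradient descent on the log-loss
        grad = [0.0] * len(w)
        for team, won in data:
            x = features(team)
            err = predict(w, x) - won
            grad = [g + err * xi for g, xi in zip(grad, x)]
        w = [wi - lr * g / len(data) for wi, g in zip(w, grad)]
    return w

games = [({0, 1, 2}, 1), ({0, 1, 3}, 1), ({0, 1, 4}, 1),
         ({0, 2, 3}, 0), ({1, 3, 4}, 0), ({2, 3, 4}, 0)]
w = fit(games)
```

The interaction column for the (0, 1) pair is what lets the model capture the synergy that a main-effects-only model misses.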
In this paper we report on a detailed study of OAuth 2.0 implementation security for ten major identity providers and 60 relying parties, all based in China. This study reveals two critical vulnerabilities present in many implementations, both allowing an attacker to control a victim user’s accounts at a relying party without knowing the user’s account name or password. We provide simple, practical recommendations for identity providers and relying parties to enable them to mitigate these vulnerabilities. The vulnerabilities have been reported to the parties concerned.", "title": "" }, { "docid": "5dee244ee673909c3ba3d3d174a7bf83", "text": "Fingerprint has remained a very vital index for human recognition. In the field of security, a series of Automatic Fingerprint Identification Systems (AFIS) has been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree to which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by the Windows Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. 
The results also show the necessity of each level of the enhancement. Keywords: AFIS; pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.", "title": "" }, { "docid": "9e46a59546d270aa74ffbe48a968b07b", "text": "We tested whether an opposing expert is an effective method of educating jurors about scientific validity by manipulating the methodological quality of defense expert testimony and the type of opposing prosecution expert testimony (none, standard, addresses the other expert's methodology) within the context of a written trial transcript. The presence of opposing expert testimony caused jurors to be skeptical of all expert testimony rather than sensitizing them to flaws in the other expert's testimony. Jurors rendered more guilty verdicts when they heard opposing expert testimony than when opposing expert testimony was absent, regardless of whether the opposing testimony addressed the methodology of the original expert or the validity of the original expert's testimony. Thus, contrary to the assumptions in the Supreme Court's decision in Daubert, opposing expert testimony may not be an effective safeguard against junk science in the courtroom.", "title": "" }, { "docid": "ab71df85da9c1798a88b2bb3572bf24f", "text": "In order to develop an efficient and reliable pulsed power supply for excimer dielectric barrier discharge (DBD) ultraviolet (UV) sources, a pulse generator using Marx topology is adopted. MOSFETs are used as switches. The 12-stage pulse generator operates with a voltage amplitude in the range of 0-5.5 kV. The repetition rate and pulsewidth can be adjusted from 0.1 to 50 kHz and 2 to 20 μs, respectively. It is used to excite a KrCl* excilamp, a typical DBD UV source. In order to evaluate the performance of the pulse generator, a sinusoidal voltage power supply dedicated to DBD lamps is also used to excite the KrCl* excilamp. 
It shows that the lamp excited by the pulse generator has better performance with regard to radiant power and system efficiency. The influence of voltage amplitude, repetition rate, pulsewidth, and rise and fall times on radiant power and system efficiency is investigated using the pulse generator. An inductor is inserted between the pulse generator and the KrCl* excilamp to reduce electromagnetic interference and enhance system reliability.", "title": "" }, { "docid": "03966c28d31e1c45896eab46a1dcce57", "text": "For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly efficient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a finite distributive lattice.", "title": "" } ]
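The exact-sampling entry above is the coupling-from-the-past idea: run coupled chains from ever further in the past with fixed randomness until the top and bottom chains coalesce, at which point the common state at time 0 is an exact stationary sample. A minimal monotone example on a lazy walk over the states {0, ..., 4} (the chain and state space are illustrative, chosen only because its partial order is preserved by the update rule):

```python
import random

N = 4  # states 0..N

def update(state, u):
    # Monotone update: the same random number u moves every state the same way,
    # so the chains started at 0 and N sandwich all other trajectories.
    return max(state - 1, 0) if u < 0.5 else min(state + 1, N)

def cftp(seed=0):
    rng = random.Random(seed)
    past = []  # random numbers for times -len(past), ..., -1 (oldest first)
    T = 1
    while True:
        # Extend the stream further into the past, REUSING earlier randomness.
        past = [rng.random() for _ in range(T - len(past))] + past
        lo, hi = 0, N  # start the extreme chains at time -T
        for u in past:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:
            return lo   # exact draw from the stationary distribution
        T *= 2          # not coalesced yet: restart from further in the past

sample = cftp()
```

Reusing the same random numbers on every attempt is essential; regenerating them would bias the output toward fast-coalescing trajectories.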
scidocsrr
0297c798889ab7375a493f2e6e1e761b
The career of metaphor.
[ { "docid": "2a61c7755dd99999721c5c8941666770", "text": "People construct ad hoc categories to achieve goals. For example, constructing the category of \"things to sell at a garage sale\" can be instrumental to achieving the goal of selling unwanted possessions. These categories differ from common categories (e.g., \"fruit,\" \"furniture\") in that ad hoc categories violate the correlational structure of the environment and are not well established in memory. Regarding the latter property, the category concepts, concept-to-instance associations, and instance-to-concept associations structuring ad hoc categories are shown to be much less established in memory than those of common categories. Regardless of these differences, however, ad hoc categories possess graded structures (i.e., typicality gradients) as salient as those structuring common categories. This appears to be the result of a similarity comparison process that imposes graded structure on any category regardless of type.", "title": "" } ]
[ { "docid": "2f0eb4a361ff9f09bda4689a1f106ff2", "text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes against the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of a Quranic quote against the original text of the Quran. In this paper, we concentrate mainly on the algorithm for verifying the fundamental text of Quranic quotes.", "title": "" }, { "docid": "ab3c0d4fecf7722a4b592473eb0de8dc", "text": "IOT (Internet of Things), relying on the exchange of information through radio frequency identification (RFID), is emerging as one of the important technologies that finds use in various applications ranging from healthcare, construction and hospitality to the transportation sector and many more. This paper describes IOT, concentrating on its use in improving and securing future shopping. This paper shows how RFID technology makes life easier and more secure and is thus helpful for the future. Keywords: IOT, RFID, Intelligent shopping, RFID tags, RFID reader, Radio frequency", "title": "" }, { "docid": "66d584c242fb96527cef9b3b084d23a8", "text": "Online discussion boards represent a rich repository of knowledge organized in a collection of user-generated content. These conversational cyberspaces allow users to express opinions and ideas and pose questions and answers without imposing strict limitations on the content. This freedom, in turn, creates an environment in which discussions are not bounded and often stray from the initial topic being discussed. In this paper we focus on approaches to assessing the relevance of posts to a thread and detecting when discussions have been steered off-topic. A set of metrics estimating the level of novelty in online discussion posts is presented. 
These metrics are based on topical estimation and contextual similarity between posts within a given thread. The metrics are aggregated to rank posts based on the degree of relevance they maintain. The aggregation scheme is data-dependent and is normalized relative to the post length.", "title": "" }, { "docid": "20441819838ba1b60279e19523abe551", "text": "Chinese remainder problem. Given: r_1, ..., r_n ∈ R (remainders) and ideals I_1, ..., I_n in R (moduli), such that I_i + I_j = R for all i ≠ j. Find: r ∈ R such that r ≡ r_i mod I_i for 1 ≤ i ≤ n. The abstract Chinese remainder problem can be treated basically in the same way as the CRP over Euclidean domains. Again there is a Lagrangian and a Newtonian approach, and one can show that the problem always has a solution; if r is a solution, then the set of all solutions is given by r + I_1 ∩ ... ∩ I_n. That is, the map φ: r ↦ (r + I_1, ..., r + I_n) is a homomorphism from R onto ∏_{j=1}^n R/I_j with kernel I_1 ∩ ... ∩ I_n. However, in the absence of the Euclidean algorithm it is not possible to compute a solution of the abstract CRP. See Lauer (1983). A preconditioned Chinese remainder algorithm: if the CRA is applied in a setting where many conversions w.r.t. a fixed set of moduli have to be computed, it is reasonable to precompute all partial results depending on the moduli alone. This idea leads to a preconditioned CRA, as described in Aho et al. (1974). Theorem 3.1.7. Let r_1, ..., r_n and m_1, ..., m_n be the remainders and moduli, respectively, of a CRP in the Euclidean domain D. Let m be the product of all the moduli. Let c_i = m/m_i and d_i = c_i^{-1} mod m_i for 1 ≤ i ≤ n. Then r = Σ_{i=1}^n c_i d_i r_i mod m (3.1.1) is a solution to the corresponding CRP. Proof. Since c_i is divisible by m_j for j ≠ i, we have c_i d_i r_i ≡ 0 mod m_j for j ≠ i. Therefore Σ_{i=1}^n c_i d_i r_i ≡ c_j d_j r_j ≡ r_j mod m_j for all 1 ≤ j ≤ n. □ A more detailed analysis of (3.1.1) reveals many common factors among the expressions c_i d_i r_i. Let us assume that n is a power of 2, n = 2^t. Obviously, m_1 ··· m_{n/2} is a factor of c_i d_i r_i for all i > n/2, and m_{n/2+1} ··· m_n is a factor of c_i d_i r_i for all i ≤ n/2. So we could write (3.1.1) as", "title": "" }, { "docid": "bf23a6fcf1a015d379dee393a294761c", "text": "This study addresses the inconsistency of contemporary literature in defining the link between leadership styles and personality traits. The plethora of literature on personality traits has culminated in the symbolic big five personality dimensions, but there is still a dearth of research on developing representative leadership styles, despite the perennial fascination with the subject. The absence of an unequivocal model for developing representative styles, in conjunction with the use of several non-mutually exclusive existing leadership styles, has created a discrepancy in developing a coherent link between leadership and personality. This study sums up 39 different styles of leadership into five distinct representative styles on the basis of similar theoretical underpinnings and common characteristics, to explore how each of these five representative leadership styles relates to the personality dimensions proposed by the big five model.", "title": "" }, { "docid": "1f28f5efa70a6387b00e335a8cf1e1d0", "text": "The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. 
Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art.", "title": "" }, { "docid": "97dfc2b23b527a05f7de443f10a89543", "text": "Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users’ quality of experience (QoE). Developing models that can accurately predict users’ QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer’s recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events factors that interact in a complex way to affect a user’s QoE. 
On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.", "title": "" }, { "docid": "7fe4f5ca8e770a51deef16d05f40b335", "text": "Ultrasonic flow meters are gaining wide usage in commercial, industrial and medical applications. Major benefits of utilizing this type of flowmeter are higher accuracy, low maintenance (no moving parts), noninvasive flow measurement, and the ability to regularly diagnose health of the meter. This application note is intended as an introduction to ultrasonic time-of-flight (TOF) flow sensing using the TDC1000 ultrasonic analog-front-end (AFE) and the TDC7200 picosecond accurate stopwatch. Information regarding a typical off-the-shelf ultrasonic flow sensor is provided, along with related equations for calculation of flow velocity and flow rate. Included in the appendix is a summary of standards for water meters and a list of low cost sensors suitable for this application space.", "title": "" }, { "docid": "048ff79b90371eb86b9d62810cfea31f", "text": "In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science communities to develop systems that could beat the accuracy of its recommendation system, Cinematch. We briefly describe the challenge itself, review related work and efforts, and summarize visible progress to date. 
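The ultrasonic flow-meter passage above mentions the standard time-of-flight equations for flow velocity; a minimal hedged sketch of the usual upstream/downstream relation (illustrative names only, not taken from the TI application note): with acoustic path length L and beam angle a, the transit times are t_dn = L/(c + v·cos a) downstream and t_up = L/(c − v·cos a) upstream, which invert to a velocity estimate that is independent of the sound speed c.

```python
import math

def flow_velocity(t_up, t_dn, path_len, angle_rad):
    """Flow velocity from upstream/downstream transit times.
    Derived from t_dn = L/(c + v*cos(a)) and t_up = L/(c - v*cos(a));
    the sound speed c cancels out of the difference-over-product form."""
    return path_len * (t_up - t_dn) / (2.0 * math.cos(angle_rad) * t_up * t_dn)

# Round-trip check with assumed values: c = 1480 m/s (water), v = 2 m/s,
# L = 0.1 m, 45-degree beam angle.
a = math.radians(45)
c, v, L = 1480.0, 2.0, 0.1
t_dn = L / (c + v * math.cos(a))
t_up = L / (c - v * math.cos(a))
print(round(flow_velocity(t_up, t_dn, L, a), 6))  # -> 2.0
```

Because c drops out, the estimate is insensitive to temperature-driven changes in sound speed, which is one reason the differential TOF form is preferred in practice.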
Other potential uses of the data are outlined, including its application to the KDD Cup 2007.", "title": "" }, { "docid": "d7e2654767d1178871f3f787f7616a94", "text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. 
In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.", "title": "" }, { "docid": "42c0f8504f26d46a4cc92d3c19eb900d", "text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.", "title": "" }, { "docid": "1d61e1eb5275444c6a2a3f8ad5c2865a", "text": "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. 
We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space; therefore, we use a distance metric involving generalized eigenvalues, which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and is performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. European Conference on Computer Vision (ECCV). Region Covariance: A Fast Descriptor for Detection and Classification. Oncel Tuzel, Fatih Porikli, and Peter Meer; Computer Science Department and Electrical and Computer Engineering Department, Rutgers University, Piscataway, NJ 08854; Mitsubishi Electric Research Laboratories, Cambridge, MA 02139.", "title": "" }, { "docid": "2943c046bae638a287ddaf72129bee0e", "text": "The use of graphene for fixed-beam reflectarray antennas at Terahertz (THz) is proposed. Graphene's unique electronic band structure leads to a complex surface conductivity at THz frequencies, which allows the propagation of very slow plasmonic modes. This leads to a drastic reduction of the electrical size of the array unit cell and thereby good array performance. The proposed reflectarray has been designed at 1.3 THz and comprises more than 25000 elements of size about λ0/16. The array reflective unit cell is analyzed using a full vectorial approach, taking into account the variation of the angle of incidence and assuming local periodicity. Good performance is obtained in terms of bandwidth, cross-polar, and grating lobes suppression, proving the feasibility of graphene-based reflectarrays and other similar spatially fed structures at Terahertz frequencies. 
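The region-covariance passage above builds a descriptor from the covariance of d-dimensional per-pixel feature vectors; a minimal pure-Python sketch of the descriptor itself (direct computation over one region, not the paper's fast integral-image scheme):

```python
def region_covariance(features):
    """Covariance descriptor of a region: C = 1/(n-1) * sum_k (z_k - mu)(z_k - mu)^T,
    where each z_k is the d-dimensional feature vector of one pixel."""
    n, d = len(features), len(features[0])
    mu = [sum(z[k] for z in features) / n for k in range(d)]  # per-feature mean
    cov = [[0.0] * d for _ in range(d)]
    for z in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (z[i] - mu[i]) * (z[j] - mu[j])
    return [[cov[i][j] / (n - 1) for j in range(d)] for i in range(d)]

# Toy region of 4 pixels with d = 2 features each (e.g. intensity, |dI/dx|).
C = region_covariance([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)])
print(C)
```

The resulting d×d matrix is symmetric positive semi-definite, which is why the passage compares descriptors with a generalized-eigenvalue metric rather than a Euclidean one.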
This result is also a first important step toward reconfigurable THz reflectarrays using graphene electric field effect.", "title": "" }, { "docid": "f87e8f9d733ed60cedfda1cbfe176cbf", "text": "Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.", "title": "" }, { "docid": "7e17c1842a70e416f0a90bdcade31a8e", "text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. 
After simulation studies of a SIW-fed ALTSA cell, a 1×8 ALTSA array fed by the SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performance.", "title": "" }, { "docid": "2283e43c2bad5ac682fe185cb2b8a9c1", "text": "As widely recognized in the literature, information technology (IT) investments have several special characteristics that make assessing their costs and benefits complicated. Here, we address the problem of evaluating a web content management system for both internal and external use. The investment is presently undergoing an evaluation process in a multinational company. We aim at making explicit the desired benefits and expected risks of the system investment. An evaluation hierarchy at a general level is constructed. After this, a more detailed hierarchy is constructed to take into account the contextual issues. To catch the contextual issues, key company representatives were interviewed. The investment alternatives are compared applying the principles of the Analytic Hierarchy Process (AHP). Due to the subjective and uncertain characteristics of strategic IT investments, a wide range of sensitivity analyses is performed.", "title": "" }, { "docid": "9bbf9422ae450a17e0c46d14acf3a3e3", "text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It incorporates a simple linear control algorithm into the formulation to show the capability of the PCT framework.", "title": "" }, { "docid": "619a699d6e848ff692a581dc40a86a10", "text": "Intelligent Transportation System (ITS) is a significant part of a smart city, and short-term traffic flow prediction plays an important role in intelligent transportation management and route guidance. 
A number of models and algorithms based on time series prediction and machine learning were applied to short-term traffic flow prediction and achieved good results. However, most of the models require the length of the input historical data to be predefined and static, which cannot automatically determine the optimal time lags. To overcome this shortage, a model called Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is proposed in this paper, which takes advantages of the three multiplicative units in the memory block to determine the optimal time lags dynamically. The dataset from Caltrans Performance Measurement System (PeMS) is used for building the model and comparing LSTM RNN with several well-known models, such as random walk(RW), support vector machine(SVM), single layer feed forward neural network(FFNN) and stacked autoencoder(SAE). The results show that the proposed prediction model achieves higher accuracy and generalizes well.", "title": "" }, { "docid": "b68f0c4aa0b5638a2a426bf9bd97a2ab", "text": "The interrelationship between ionizing radiation and the immune system is complex, multifactorial, and dependent on radiation dose/quality and immune cell type. High-dose radiation usually results in immune suppression. On the contrary, low-dose radiation (LDR) modulates a variety of immune responses that have exhibited the properties of immune hormesis. Although the underlying molecular mechanism is not fully understood yet, LDR has been used clinically for the treatment of autoimmune diseases and malignant tumors. These advancements in preclinical and clinical studies suggest that LDR-mediated immune modulation is a well-orchestrated phenomenon with clinical potential. We summarize recent developments in the understanding of LDR-mediated immune modulation, with an emphasis on its potential clinical applications.", "title": "" } ]
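One of the passages in the list above is a textbook excerpt stating Theorem 3.1.7, the preconditioned Chinese remainder algorithm; a minimal sketch over the integers, assuming pairwise-coprime moduli:

```python
from math import prod

def preconditioned_crt(remainders, moduli):
    """Solve r == r_i (mod m_i) via r = sum(c_i * d_i * r_i) mod m,
    where c_i = m / m_i and d_i = c_i^{-1} mod m_i (Theorem 3.1.7)."""
    m = prod(moduli)
    # Precomputable part: depends only on the moduli.
    c = [m // mi for mi in moduli]
    d = [pow(ci, -1, mi) for ci, mi in zip(c, moduli)]  # modular inverses (Python 3.8+)
    # Conversion part: depends on the remainders.
    return sum(ci * di * ri for ci, di, ri in zip(c, d, remainders)) % m

# r == 2 (mod 3), r == 3 (mod 5), r == 2 (mod 7)
print(preconditioned_crt([2, 3, 2], [3, 5, 7]))  # -> 23
```

The c_i and d_i depend only on the moduli, so they can be cached across many conversions against a fixed modulus set, which is exactly the point of the preconditioning described in the excerpt.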
scidocsrr
040dbe51b012768f8a43cb51f6377a01
A Generative Approach for Dynamically Varying Photorealistic Facial Expressions in Human-Agent Interactions
[ { "docid": "102bec350390b46415ae07128cb4e77f", "text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "title": "" } ]
[ { "docid": "d9947d2a6b6e184cf27515ad72cc7f98", "text": "This study examined the role of a social network site (SNS) in the lives of 11 high school teenagers from low-income families in the U.S. We conducted interviews, talk-alouds and content analysis of MySpace profiles. Qualitative analysis of these data revealed three themes. First, SNSs facilitated emotional support, helped maintain relationships, and provided a platform for self-presentation. Second, students used their online social network to fulfill essential social learning functions. Third, within their SNS, students engaged in a complex array of communicative and creative endeavors. In several instances, students’ use of social network sites demonstrated the new literacy practices currently being discussed within education reform efforts. Based on our findings, we suggest additional directions for related research and educational practices.", "title": "" }, { "docid": "6d31096c16817f13641b23ae808b0dce", "text": "In the competitive environment of the internet, retaining and growing one's user base is of major concern to most web services. Furthermore, the economic model of many web services is allowing free access to most content, and generating revenue through advertising. This unique model requires securing user time on a site rather than the purchase of good which makes it crucially important to create new kinds of metrics and solutions for growth and retention efforts for web services. In this work, we address this problem by proposing a new retention metric for web services by concentrating on the rate of user return. We further apply predictive analysis to the proposed retention metric on a service, as a means for characterizing lost customers. Finally, we set up a simple yet effective framework to evaluate a multitude of factors that contribute to user return. Specifically, we define the problem of return time prediction for free web services. 
Our solution is based on Cox's proportional hazards model from survival analysis. The hazard-based approach offers several benefits, including the ability to work with censored data, to model the dynamics in user return rates, and to easily incorporate different types of covariates in the model. We compare the performance of our hazard-based model in predicting the user return time and in categorizing users into buckets based on their predicted return time, against several baseline regression and classification methods, and find the hazard-based approach to be superior.", "title": "" }, { "docid": "d414dd7d2fd699e58cae194a828ae042", "text": "Network design problems consist of identifying an optimal subgraph of a graph, subject to side constraints. In generalized network design problems, the vertex set is partitioned into clusters and the feasibility conditions are expressed in terms of the clusters. Several applications of generalized network design problems arise in the fields of telecommunications, transportation and biology. The aim of this review article is to formally define generalized network design problems, to study their properties and to provide some applications.", "title": "" }, { "docid": "5f3dfd97498034d0a104bf41149651f2", "text": "BACKGROUND\nResearch questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument using the adaptation process of an attitudinal instrument as an example.\n\n\nMETHODS\nA questionnaire was needed for the implementation of a study in Norway in 2007. 
There were no appropriate instruments available in Norwegian, thus an Australian-English instrument was cross-culturally adapted.\n\n\nRESULTS\nThe adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis. Instead a new two-factor scale was identified and found valid in the new setting.\n\n\nCONCLUSIONS\nThe failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equal between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times.", "title": "" }, { "docid": "4a201e61cbb168df4df48fe331817260", "text": "The use of qualitative research methodology is well established for data generation within healthcare research generally and clinical pharmacy research specifically. In the past, qualitative research methodology has been criticized for lacking rigour, transparency, justification of data collection and analysis methods being used, and hence the integrity of findings. Demonstrating rigour in qualitative studies is essential so that the research findings have the “integrity” to make an impact on practice, policy or both. 
Unlike other healthcare disciplines, the issue of “quality” of qualitative research has not been discussed much in the clinical pharmacy discipline. The aim of this paper is to highlight the importance of rigour in qualitative research, present different philosophical standpoints on the issue of quality in qualitative research and to discuss briefly strategies to ensure rigour in qualitative research. Finally, a mini review of recent research is presented to illustrate the strategies reported by clinical pharmacy researchers to ensure rigour in their qualitative research studies.", "title": "" }, { "docid": "c80b01048778e5863882868774e3e98d", "text": "A new liaison role between Information Systems (IS) and users, the relationship manager (RM), has recently emerged. According to the prescriptive literature, RMs add value by deep understanding of the businesses they serve and technology leadership. Little is known, however, about their actual work practices. Is the RM an intermediary, filtering information and sometimes misinformation, from clients to IS, or do they play more pivotal roles as entrepreneurs and change agents? This article addresses these questions by studying four RMs in four different industries. The RMs were studied using the structured observation methodology employed by Mintzberg (CEOs), Ives and Olson (MIS managers), and Stephens et al. (CIOs). The findings suggest that while RMs spend less time communicating with users than one would expect, they are leaders, often mavericks, in the entrepreneurial work practices necessary to build partnerships with clients and to make the IS infrastructure more responsive to client needs.", "title": "" }, { "docid": "cbc22adbd8f7a82d1972e6b53bc5e000", "text": "This thesis examines several aspects of narrative in video games, in order to construct a detailed image of the characteristics that separate video game narrative from other, noninteractive narrative forms. 
These findings are subsequently used to identify and define three basic models of video game narrative. Since it has also been argued that video games should not have narrative in the first place, the validity of this question is also examined. Overall, it is found that while the interactive nature of the video game does indeed cause some problems for the implementation of narrative, this relationship is not as problematic as has been claimed, and there seems to be no reason to argue that video games and narrative should be kept separate from each other. It is also found that the interactivity of the video game encourages the use of certain narrative tools while discouraging or disabling the author’s access to other options. Thus, video games in general allow for a much greater degree of subjectivity than is typical in non-interactive narrative forms. At the same time, the narrator’s ability to manipulate time within the story is restricted precisely because of this increased subjectivity. Another interesting trait of video game narrative is that it opens up the possibility of the game player sharing some of the author’s abilities as the narrator. Three models of video game narrative are suggested. These include the linear ‘string of pearls’ model, where the player is given a certain degree of freedom at certain times during the game, but ultimately still follows a linear storyline; the ‘branching narrative’ model, where the player helps define the course and ending of the story by selecting from narrative branches; and the ‘amusement park’ model, where the player is invited to put together a story out of a group of optional subplots. 
The existence of a fourth model, the ‘building blocks’ model, is also noted, but this model is not discussed in detail as it does not utilise any traditional narrative structure, instead allowing the players to define every aspect of the story.", "title": "" }, { "docid": "b829049a8abf47f8f13595ca54eaa009", "text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network-based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.", "title": "" }, { "docid": "d1f130e8b742023e5224a2f99c3639b5", "text": "An increasing number of firms are responding to new opportunities and risks originating from digital technologies by introducing company-wide digital transformation strategies as a means to systematically address their digital transformation. Yet, what processes and strategizing activities affect the formation of digital transformation strategies in organizations is not well understood. 
We adopt a phenomenon-based approach and investigate the formation of digital transformation strategies in organizations from a process perspective. Drawing on an activity-based process model that links Mintzberg’s strategy typology with the concept of IS strategizing, we conduct a multiple-case study at three European car manufacturers. Our results indicate that digital transformation strategies are predominantly shaped by a diversity of emergent strategizing activities of separate organizational subcommunities through a bottom-up process and prior to the initiation of a holistic digital transformation strategy by top management. As a result, top management’s deliberate strategies seek to accomplish the subsequent alignment of preexisting emergent strategy contents with their intentions and to simultaneously increase the share of deliberate contents. Besides providing practical implications for the formulation and implementation of a digital transformation strategy, we contribute to the literature on digital transformation and IS strategizing.", "title": "" }, { "docid": "16708c9e697dbd867aa81420bc669953", "text": "We propose a dynamic trust management protocol for Internet of Things (IoT) systems to deal with misbehaving nodes whose status or behavior may change dynamically. We consider an IoT system being deployed in a smart community where each node autonomously performs trust evaluation. We provide a formal treatment of the convergence, accuracy, and resilience properties of our dynamic trust management protocol and validate these desirable properties through simulation. We demonstrate the effectiveness of our dynamic trust management protocol with a trust-based service composition application in IoT environments. Our results indicate that trust-based service composition significantly outperforms non-trust-based service composition and approaches the maximum achievable performance based on ground truth status. 
Furthermore, our dynamic trust management protocol is capable of adaptively adjusting the best trust parameter setting in response to dynamically changing environments to maximize application performance.", "title": "" }, { "docid": "ce1f67735cfa0e68246e92c53072155f", "text": "Event and relation extraction are central tasks in biomedical text mining. Where relation extraction concerns the detection of semantic connections between pairs of entities, event extraction expands this concept with the addition of trigger words, multiple arguments and nested events, in order to more accurately model the diversity of natural language. In this work we develop a convolutional neural network that can be used for both event and relation extraction. We use a linear representation of the input text, where information is encoded with various vector space embeddings. Most notably, we encode the parse graph into this linear space using dependency path embeddings. We integrate our neural network into the open source Turku Event Extraction System (TEES) framework. Using this system, our machine learning model can be easily applied to a large set of corpora from e.g. the BioNLP, DDI Extraction and BioCreative shared tasks. We evaluate our system on 12 different event, relation and NER corpora, showing good generalizability to many tasks and achieving improved performance on several corpora.", "title": "" }, { "docid": "c1220bd89725bf06b811f3ae14fc1a3f", "text": "In the simultaneous localization and mapping (SLAM) problem, a mobile robot must build a map of its environment while simultaneously determining its location within that map. We propose a new algorithm, for visual SLAM (VSLAM), in which the robot's only sensory information is video imagery. 
Our approach combines stereo vision with a popular sequential Monte Carlo (SMC) algorithm, the Rao-Blackwellised particle filter, to simultaneously explore multiple hypotheses about the robot's six degree-of-freedom trajectory through space and maintain a distinct stochastic map for each of those candidate trajectories. We demonstrate the algorithm's effectiveness in mapping a large outdoor virtual reality environment in the presence of odometry error", "title": "" }, { "docid": "41d97d98a524e5f1e45ae724017819d9", "text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.", "title": "" }, { "docid": "a07fe75974cc12b12c274a72b1a1fdf5", "text": "We study a model of incentivizing correct computations in a variety of cryptographic tasks. For each of these tasks we propose a formal model and design protocols satisfying our model's constraints in a hybrid model where parties have access to special ideal functionalities that enable monetary transactions. We summarize our results: Verifiable computation. We consider a setting where a delegator outsources computation to a worker who expects to get paid in return for delivering correct outputs. We design protocols that compile both public and private verification schemes to support incentivizations described above. Secure computation with restricted leakage. Building on the recent work of Huang et al. 
(Security and Privacy 2012), we show an efficient secure computation protocol that monetarily penalizes an adversary that attempts to learn one bit of information but gets detected in the process. Fair secure computation. Inspired by recent work, we consider a model of secure computation where a party that aborts after learning the output is monetarily penalized. We then propose an ideal transaction functionality FML and show a constant-round realization on the Bitcoin network. Then, in the FML-hybrid world we design a constant round protocol for secure computation in this model. Noninteractive bounties. We provide formal definitions and candidate realizations of noninteractive bounty mechanisms on the Bitcoin network which (1) allow a bounty maker to place a bounty for the solution of a hard problem by sending a single message, and (2) allow a bounty collector (unknown at the time of bounty creation) with the solution to claim the bounty, while (3) ensuring that the bounty maker can learn the solution whenever its bounty is collected, and (4) preventing malicious eavesdropping parties from both claiming the bounty as well as learning the solution.\n All our protocol realizations (except those realizing fair secure computation) rely on a special ideal functionality that is not currently supported in Bitcoin due to limitations imposed on Bitcoin scripts. Motivated by this, we propose validation complexity of a protocol, a formal complexity measure that captures the amount of computational effort required to validate Bitcoin transactions required to implement it in Bitcoin. Our protocols are also designed to take advantage of optimistic scenarios where participating parties behave honestly.", "title": "" }, { "docid": "6a2d9597887a39d3f3a22427b32260aa", "text": "A complete over-current and short-circuit protection system for Low-Drop Out (LDO) regulator applications is presented. 
The system consists of a current-sense circuit, a current comparator, a D Flip-Flop, an OR logic gate and the short-circuit sense topology. The protection circuit is able to shut down the LDO rapidly by producing a control signal when an over-current event occurs, while during the normal operation of the LDO the protection circuit is idle. The restart of the LDO has to be made manually and a master Reset signal is also available. The proposed protection system was designed in a standard 0.18u CMOS technology using high-voltage transistors.", "title": "" }, { "docid": "7a033c2bedf107dfbd92887eaa4ae8c0", "text": "Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.", "title": "" }, { "docid": "4021a6d34ca5a6c3d2d021d0ba2cbcf7", "text": "Visual compatibility is critical for fashion analysis, yet is missing in existing fashion image synthesis systems. In this paper, we propose to explicitly model visual compatibility through fashion image inpainting. 
To this end, we present Fashion Inpainting Networks (FiNet), a two-stage image-to-image generation framework that is able to perform compatible and diverse inpainting. Disentangling the generation of shape and appearance to ensure photorealistic results, our framework consists of a shape generation network and an appearance generation network. More importantly, for each generation network, we introduce two encoders interacting with one another to learn latent code in a shared compatibility space. The latent representations are jointly optimized with the corresponding generation network to condition the synthesis process, encouraging a diverse set of generated results that are visually compatible with existing fashion garments. In addition, our framework is readily extended to clothing reconstruction and fashion transfer, with impressive results. Extensive experiments and comparisons with state-of-the-art approaches on the fashion synthesis task quantitatively and qualitatively demonstrate the effectiveness of our method.", "title": "" }, { "docid": "6807545797869605f90721ee5777b5a0", "text": "This paper examines location-based services (LBS) from a broad perspective involving definitions, characteristics, and application prospects. We present an overview of LBS modeling regarding users, locations, contexts and data. The LBS modeling endeavors are cross-examined with a research agenda of geographic information science. Some core research themes are briefly speculated. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "81f71bf0f923ff07a770ae30321382f6", "text": "The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing although there are big differences between fields. 
There are no indications that the growth rate has decreased in the last 50 years. At the same time publication using new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases. This means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rate, including computer science and engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge it is problematic that SCI has been used and is used as the dominant source for science indicators based on publication and citation numbers. The limited data available for social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI). Therefore the declining coverage of the citation databases problematizes the use of this source.", "title": "" } ]
scidocsrr
4af838ff29ce0662d0e1717a85911705
Class Proportion Estimation with Application to Multiclass Anomaly Rejection
[ { "docid": "d6564e6ab6b770792f7563377478fb18", "text": "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "title": "" } ]
[ { "docid": "7643861888d06aa7d4df682ec960926b", "text": "This meta-analysis explores the relationship between SNS-use and academic performance. Examination of the literature containing quantitative measurements of both SNS-use and academic performance produced a sample of 28 effect sizes (N = 101,441) for review. Results indicated a significant negative relationship between SNS-use and academic performance. Further moderation analysis points to test type as an important source of variability in the relationship. We found a negative correlation between SNS-use and GPA, but a positive one between SNS-use and language test scores. Moreover, we found that the relationship of SNS-use and GPA was more strongly negative in females and college students. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "91b116c4b2e19096b2ae55e40c4946e3", "text": "Nanotechnology is expected to open some new aspects to fight and prevent diseases using atomic scale tailoring of materials. The ability to uncover the structure and function of biosystems at the nanoscale stimulates research leading to improvement in biology, biotechnology, medicine and healthcare. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. The integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles. Among the nanomaterials with antibacterial properties, metallic nanoparticles are the most effective. Nanoparticles increase chemical activity due to crystallographic surface structure with their large surface to volume ratio. The study of bactericidal nanomaterials is important because of the increase in new strains of bacteria that are resistant to the most potent antibiotics. 
This has promoted research in the well-known activity of silver ions and silver-based compounds, including silver nanoparticles. This effect was size and dose dependent and was more pronounced against gram-negative bacteria than gram-positive organisms.", "title": "" }, { "docid": "78a104485843c3940a364719e7a22d18", "text": "We present a simple and generic way to reason about name binding. Name binding is an essential component of every nontrivial programming language, matching uses of names, references, with the things that they name, declarations, based on scoping rules defined by the language. The definition of name binding is often entangled with the language-specific details, which makes abstract and comparative analysis of competing designs challenging. We present a framework that allows us to abstract the fundamental notions of references, declarations, and scopes, and to express scoping rules in terms of four scope combinators and three properties of a specific programming language encapsulated in a concept named Language. Using this framework, we clarify complex scoping rules like argument-dependent lookup in C++, investigate the implications of the concepts feature for C++, and introduce a novel scoping rule named weak hiding. In an ideal world, specifications could be formulated based on our framework, and compilers could use such a formulation to unambiguously implement name binding. While our examples are primarily centered around C++ and lexical scoping, our framework has applications in other languages and dynamic scoping.", "title": "" }, { "docid": "36f6f21ff8619ef89900cc0de7ff1a1d", "text": "Human beings are the most intelligent animals in this world. Intuitively, an optimization algorithm inspired by the human creative problem-solving process should be superior to optimization algorithms inspired by the collective behavior of insects such as ants and bees. 
In this paper, we introduce a novel brain storm optimization algorithm, which was inspired by the human brainstorming process. Two benchmark functions were tested to validate the effectiveness and usefulness of the proposed algorithm.", "title": "" }, { "docid": "f5be73d82f441b5f0d6011bbbec8b759", "text": "Abnormal crowd behavior detection is an important research issue in computer vision. The traditional methods first extract the local spatio-temporal cuboid from video. Then the cuboid is described by optical flow or gradient features, etc. Unfortunately, because of the complex environmental conditions, such as severe occlusion, over-crowding, etc., the existing algorithms cannot be efficiently applied. In this paper, we derive the high-frequency and spatio-temporal (HFST) features to detect the abnormal crowd behaviors in videos. They are obtained by applying the wavelet transform to the plane in the cuboid which is parallel to the time direction. The high-frequency information characterizes the dynamic properties of the cuboid. The HFST features are applied to both global and local abnormal crowd behavior detection. For the global abnormal crowd behavior detection, Latent Dirichlet allocation is used to model the normal scenes. For the local abnormal crowd behavior detection, Multiple Hidden Markov Models, with a competitive mechanism, are employed to model the normal scenes. The comprehensive experiment results show that the speed of detection has been greatly improved using our approach. Moreover, a good accuracy has been achieved considering the false positive and false negative detection rates.", "title": "" }, { "docid": "15ec9bfa4c3a989fb67dce4f1fb172c5", "text": "This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks on software and developing efficient accelerators for execution on FPGA. 
Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces the power-hungry matrix-multiplication with light-weight XnorPopcount operations. However, binary networks suffer from a degraded accuracy compared to their fixed-point counterparts. We show that the state-of-the-art methods for optimizing binary network accuracy significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy while the area overhead of multi-level binarization is negligible.", "title": "" }, { "docid": "78d00cb1af094c91cc7877ba051f925e", "text": "Neuropathic pain refers to pain that originates from pathology of the nervous system. Diabetes, infection (herpes zoster), nerve compression, nerve trauma, \"channelopathies,\" and autoimmune disease are examples of diseases that may cause neuropathic pain. The development of both animal models and newer pharmacological strategies has led to an explosion of interest in the underlying mechanisms. Neuropathic pain reflects both peripheral and central sensitization mechanisms. Abnormal signals arise not only from injured axons but also from the intact nociceptors that share the innervation territory of the injured nerve. This review focuses on how both human studies and animal models are helping to elucidate the mechanisms underlying these surprisingly common disorders. 
The rapid gain in knowledge about abnormal signaling promises breakthroughs in the treatment of these often debilitating disorders.", "title": "" }, { "docid": "8a985329026f6ba7696f770b4d8d07df", "text": "Multimodal sentiment classification in practical applications may have to rely on erroneous and imperfect views, namely (a) language transcription from a speech recognizer and (b) under-performing acoustic views. This work focuses on improving the representations of these views by performing a deep canonical correlation analysis with the representations of the better performing manual transcription view. Enhanced representations of the imperfect views can be obtained even in the absence of the perfect views and give an improved performance during test conditions. Evaluations on the CMU-MOSI and CMU-MOSEI datasets demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "3a1c4b17a3fd943576bd7f1a6a170501", "text": "Tourists are not all the same; they have different pictures of their ideal vacation. Tourists are heterogeneous. Market segmentation is the strategic tool to account for heterogeneity among tourists by grouping them into market segments which include members similar to each other and dissimilar to members of other segments. Both tourism researchers and the tourism industry use market segmentation widely to study opportunities for competitive advantage in the marketplace. This chapter explains the foundations of market segmentation, discusses alternative ways in which market segments can be formed, guides the reader through two practical examples, highlights methodological difficulties and points to milestone publications and recently published applications of market segmentation in the field of tourism.", "title": "" }, { "docid": "8e6be29997001367542283e94c7d8f05", "text": "Character recognition has been widely used since its inception in applications involving the processing of scanned or camera-captured documents. 
There exist multiple scripts in which the languages are written. The scripts could broadly be divided into cursive and non-cursive scripts. Recurrent neural networks have been proven to obtain state-of-the-art results for optical character recognition. We present a thorough investigation of the performance of recurrent neural network (RNN) for cursive and non-cursive scripts. We employ bidirectional long short-term memory (BLSTM) networks, which are a variant of the standard RNN. The output layer of the architecture used to carry out our investigation is a special layer called connectionist temporal classification (CTC) which does the sequence alignment. The CTC layer takes as an input the activations of LSTM and aligns the target labels with the inputs. The results obtained at the character level for both cursive Urdu and non-cursive English scripts are significant and suggest that the BLSTM technique is potentially more useful than the existing OCR algorithms.", "title": "" }, { "docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a", "text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consisted of students and non-students (citizens) to investigate whether lying behavior depends on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counterpart for the guilty aversion of lying. 
The following results are obtained: i) the frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but it is not significantly different between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) the frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.", "title": "" }, { "docid": "0950d606153e4e634f4bb5633562aa69", "text": "The approach that one chooses to evolve software-intensive systems depends on the organization, the system, and the technology. We believe that significant progress in system architecture, system understanding, object technology, and net-centric computing makes it possible to economically evolve software systems to a state in which they exhibit greater functionality and maintainability. In particular, interface technology, wrapping technology, and network technology are opening many opportunities to leverage existing software assets instead of scrapping them and starting over. But these promising technologies cannot be applied in a vacuum or without management understanding and control. There must be a framework in which to motivate the organization to understand its business opportunities, its application systems, and its road to an improved target system. 
This report outlines a comprehensive system evolution approach that incorporates an enterprise framework for the application of the promising technologies in the context of legacy systems.", "title": "" }, { "docid": "2449aaafacd9a824a8f867052bd7ffe3", "text": "As medicine leans increasingly on mathematics no clinician can afford to leave the statistical aspects of a paper to the \"experts.\" If you are numerate, try the \"Basic Statistics for Clinicians\" series in the Canadian Medical Association Journal,1 2 3 4 or a more mainstream statistical textbook.5 If, on the other hand, you find statistics impossibly difficult, this article and the next in this series give a checklist of preliminary questions to help you appraise the statistical validity of a paper.", "title": "" }, { "docid": "785d2100fd91350e9835b69b130d631b", "text": "Distracted driving might lead to many catastrophic consequences. Developing a countermeasure to track drivers' focus of attention (FOA) and engagement of operators in dual (multi)-tasking conditions is thus imperative. Ten healthy volunteers participated in a dual-task experiment that comprised two tasks: a lane-keeping driving task and a mathematical problem-solving task (e.g., 24 + 15=37?) during which their electroencephalogram (EEG) and behaviors were concurrently recorded. Independent component analysis (ICA) was employed as a spatial filter to separate the contributions of independent sources from the recorded EEG data. The power spectra of six components (i.e., frontal, central, parietal, occipital, left motor, and right motor) extracted from single-task conditions were fed into support vector machine (SVM) based on the radial basis function (RBF) kernel to build an FOA assessment system. The system achieved 84.6 ± 5.8% and 86.2 ± 5.4% classification accuracies in detecting the participants' FOAs on the math versus driving tasks, respectively. 
This FOA assessment system was then applied to evaluate participants' FOAs during dual-task conditions. The detected FOAs revealed that participants' cognitive attention and strategies dynamically changed between tasks to optimize the overall performance, as attention was limited and competed. The empirical results of this study demonstrate the feasibility of a practical system for continuously estimating cognitive attention through EEG spectra.", "title": "" }, { "docid": "00b6eb8e2e2c288f19ed8ce5a5cba270", "text": "Over the past decade or so, aided by information and communication technologies, money laundering has seen a sharp increase. While traditionally money laundering was associated with drug cartels and warlords, increasingly the focus has shifted towards white-collar crime and terrorist financing. It is therefore important to understand why money laundering (and hence white-collar crime) comes into being and sustains over a period of time. This paper, while exploring the antecedents of technology-enabled white-collar crime, undertakes a cultural analysis of the Silk Road case. The cultural analysis helps in understanding patterns of behavior associated with technology-enabled white-collar crimes.", "title": "" }, { "docid": "98e557f291de3b305a91e47f59a9ed34", "text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. 
The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfM-Net extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.", "title": "" }, { "docid": "eac86562382c4ec9455f1422b6f50e9f", "text": "In this paper we look at how to sparsify a graph, i.e., how to reduce the edge set while keeping the nodes intact, so as to enable faster graph clustering without sacrificing quality. The main idea behind our approach is to preferentially retain the edges that are likely to be part of the same cluster. We propose to rank edges using a simple similarity-based heuristic that we efficiently compute by comparing the minhash signatures of the nodes incident to the edge. For each node, we select the top few edges to be retained in the sparsified graph. Extensive empirical results on several real networks and using four state-of-the-art graph clustering and community discovery algorithms reveal that our proposed approach realizes excellent speedups (often in the range 10-50), with little or no deterioration in the quality of the resulting clusters. In fact, for at least two of the four clustering algorithms, our sparsification consistently enables higher clustering accuracies.", "title": "" }, { "docid": "9897f5e64b4a5d6d80fadb96cb612515", "text": "Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. 
The conventional approach to designing such CNN accelerators is to focus on creating accelerators that iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.", "title": "" }, { "docid": "704d729295cddd358eba5eefdf0bdee4", "text": "Remarkable advances in instrument technology, automation and computer science have greatly simplified many aspects of previously tedious tasks in laboratory diagnostics, creating a greater volume of routine work, and significantly improving the quality of results of laboratory testing. Following the development and successful implementation of high-quality analytical standards, analytical errors are no longer the main factor influencing the reliability and clinical utilization of laboratory diagnostics. Therefore, additional sources of variation in the entire laboratory testing process should become the focus for further and necessary quality improvements. 
Errors occurring within the extra-analytical phases are still the prevailing source of concern. Accordingly, lack of standardized procedures for sample collection, including patient preparation, specimen acquisition, handling and storage, account for up to 93% of the errors currently encountered within the entire diagnostic process. The profound awareness that complete elimination of laboratory testing errors is unrealistic, especially those relating to extra-analytical phases that are harder to control, highlights the importance of good laboratory practice and compliance with the new accreditation standards, which encompass the adoption of suitable strategies for error prevention, tracking and reduction, including process redesign, the use of extra-analytical specifications and improved communication among caregivers.", "title": "" }, { "docid": "1168c9e6ce258851b15b7e689f60e218", "text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).", "title": "" } ]
scidocsrr
2cb2411c77f6f61e00d870bd439d887c
Personalising learning with dynamic prediction and adaptation to learning styles in a conversational intelligent tutoring system
[ { "docid": "fea4f7992ec61eaad35872e3a800559c", "text": "The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Mismatches often occur between the learning styles of students in a language class and the teaching style of the instructor, with unfortunate effects on the quality of the students’ learning and on their attitudes toward the class and the subject. This paper defines several dimensions of learning style thought to be particularly relevant to foreign and second language education, outlines ways in which certain learning styles are favored by the teaching styles of most language instructors, and suggests steps to address the educational needs of all students in foreign language classes. Students learn in many ways—by seeing and hearing; reflecting and acting; reasoning logically and intuitively; memorizing and visualizing. Teaching methods also vary. Some instructors lecture, others demonstrate or discuss; some focus on rules and others on examples; some emphasize memory and others understanding. How much a given student learns in a class is governed in part by that student’s native ability and prior preparation but also by the compatibility of his or her characteristic approach to learning and the instructor’s characteristic approach to teaching. The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Learning styles have been extensively discussed in the educational psychology literature (Claxton & Murrell 1987; Schmeck 1988) and specifically in the context Richard M. Felder (Ph.D., Princeton University) is the Hoechst Celanese Professor of Chemical Engineering at North Carolina State University,", "title": "" } ]
[ { "docid": "0a96e0b1c82ba4fecacda16746b29446", "text": "PURPOSE\nExternal transcranial electric and magnetic stimulation techniques allow for the fast induction of sustained and measurable changes in cortical excitability. Here we aim to develop a paradigm using transcranial alternating current (tACS) in a frequency range higher than 1 kHz, which potentially interferes with membrane excitation, to shape neuroplastic processes in the human primary motor cortex (M1).\n\n\nMETHODS\nTranscranial alternating current stimulation was applied at 1, 2 and 5 kHz over the left primary motor cortex with a reference electrode over the contralateral orbit in 11 healthy volunteers for a duration of 10 min at an intensity of 1 mA. Monophasic single-pulse transcranial magnetic stimulation (TMS) was used to measure changes in corticospinal excitability, both during and after tACS in the low kHz range, in the right hand muscle. As a control, inactive sham stimulation was performed.\n\n\nRESULTS\nAll frequencies of tACS increased the amplitudes of motor-evoked potentials (MEPs) up to 30-60 min post stimulation, compared to the baseline. Two and 5 kHz stimulations were more efficacious in inducing sustained changes in cortical excitability than 1 kHz stimulation, compared to sham stimulation.\n\n\nCONCLUSIONS\nSince tACS in the low kHz range appears too fast to interfere with network oscillations, this technique opens a new possibility to directly interfere with cortical excitability, probably via neuronal membrane activation. It may also potentially replace more conventional repetitive transcranial magnetic stimulation (rTMS) techniques for some applications in a clinical setting.", "title": "" }, { "docid": "b648cbaef5ae2e273ddd8549bc360af5", "text": "We present extensions to a continuous-state dependency parsing method that make it applicable to morphologically rich languages. 
Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.", "title": "" }, { "docid": "573dde1b9187a925ddad7e2f1e5102c4", "text": "Nowadays, the usage of cloud storages to store data is a popular alternative to traditional local storage systems. However, besides the benefits such services can offer, there are also some downsides like vendor lock-in or unavailability. Furthermore, the large number of available providers and their different pricing models can turn the search for the best fitting provider into a tedious and cumbersome task. Moreover, the optimal selection of a provider may change over time. In this paper, we formalize a system model that uses several cloud storages to offer a redundant storage for data. The corresponding optimization problem considers historic data access patterns and predefined Quality of Service requirements for the selection of the best-fitting storages. Through extensive evaluations we show the benefits of our work and compare the novel approach against a baseline which follows a state-of-the-art approach.", "title": "" }, { "docid": "90f3c2ea17433ee296702cca53511b9e", "text": "This paper presents the design process, detailed analysis, and prototyping of a novel-structured line-start solid-rotor-based axial-flux permanent-magnet (AFPM) motor capable of autostarting with solid-rotor rings. The preliminary design is a slotless double-sided AFPM motor with four poles for high torque density and stable operation. 
Two concentric unilevel-spaced raised rings are added to the inner and outer radii of the rotor discs for smooth line-start of the motor. The design allows the motor to operate at both starting and synchronous speeds. The basic equations for the solid rings of the rotor of the proposed AFPM motor are discussed. Nonsymmetry of the designed motor led to its 3-D time-stepping finite-element analysis (FEA) via Vector Field Opera 14.0, which evaluates the design parameters and predicts the transient performance. To verify the design, a prototype 1-hp four-pole three-phase line-start AFPM synchronous motor is built and is used to test the performance in real time. There is a good agreement between experimental and FEA-based computed results. It is found that the prototype motor maintains high starting torque and good synchronization.", "title": "" }, { "docid": "be1b9731df45408571e75d1add5dfe9c", "text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "title": "" }, { "docid": "4d2b0b01fae0ff2402fc2feaa5657574", "text": "In this paper, we give an algorithm for the analysis and correction of the distorted QR barcode (QR-code) image. 
The introduced algorithm is based on finding the code area by detecting the four corners of the 2D barcode. We combine Canny edge detection with contour-finding algorithms to remove noise and reduce computation, and utilize two tangents to approximate the right-bottom point. Then, we give a detailed description of how to use inverse perspective transformation to rebuild a QR-code image from a distorted one. We test our algorithm on images taken by mobile phones. The experiment shows that our algorithm is effective.", "title": "" }, { "docid": "634b30b81da7139082927109b4c22d5e", "text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. 
At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.", "title": "" }, { "docid": "62376954e4974ea2d52e96b373c67d8a", "text": "Imagine the following situation. You’re in your car, listening to the radio and suddenly you hear a song that catches your attention. It’s the best new song you have heard for a long time, but you missed the announcement and don’t recognize the artist. Still, you would like to know more about this music. What should you do? You could call the radio station, but that’s too cumbersome. Wouldn’t it be nice if you could push a few buttons on your mobile phone and a few seconds later the phone would respond with the name of the artist and the title of the music you’re listening to? Perhaps even sending an email to your default email address with some supplemental information. In this paper we present an audio fingerprinting system, which makes the above scenario possible. By using the fingerprint of an unknown audio clip as a query on a fingerprint database, which contains the fingerprints of a large library of songs, the audio clip can be identified. At the core of the presented system are a highly robust fingerprint extraction method and a very efficient fingerprint search strategy, which enables searching a large fingerprint database with only limited computing resources.", "title": "" }, { "docid": "03d5eadaefc71b1da1b26f4e2923a082", "text": "Sleep is characterized by a structured combination of neuronal oscillations. In the hippocampus, slow-wave sleep (SWS) is marked by high-frequency network oscillations (approximately 200 Hz \"ripples\"), whereas neocortical SWS activity is organized into low-frequency delta (1-4 Hz) and spindle (7-14 Hz) oscillations. While these types of hippocampal and cortical oscillations have been studied extensively in isolation, the relationships between them remain unknown. 
Here, we demonstrate the existence of temporal correlations between hippocampal ripples and cortical spindles that are also reflected in the correlated activity of single neurons within these brain structures. Spindle-ripple episodes may thus constitute an important mechanism of cortico-hippocampal communication during sleep. This coactivation of hippocampal and neocortical pathways may be important for the process of memory consolidation, during which memories are gradually translated from short-term hippocampal to longer-term neocortical stores.", "title": "" }, { "docid": "0a63a875b57b963372640f8fb527bd5c", "text": "KEMI-TORNIO UNIVERSITY OF APPLIED SCIENCES Degree programme: Business Information Technology Writer: Guo, Shuhang Thesis title: Analysis and evaluation of similarity metrics in collaborative filtering recommender system Pages (of which appendix): 62 (1) Date: May 15, 2014 Thesis instructor: Ryabov, Vladimir This research is focused on the field of recommender systems. The general aims of this thesis are to summarize the state of the art in recommendation systems, evaluate the efficiency of the traditional similarity metrics with a variety of data sets, and propose an ideology to model new similarity metrics. The literature on recommender systems was studied to summarize the current development in this field. The implementation of the recommendation and evaluation was achieved with Apache Mahout, which provides an open-source platform for recommender engines. By importing data information into the project, a customized recommender engine was built. Since the recommending results of collaborative filtering recommender significantly rely on the choice of similarity metrics and the types of the data, several traditional similarity metrics provided in Apache Mahout were examined by the evaluator offered in the project with five data sets collected by several academic groups. 
From the evaluation, I found that the best performance of each similarity metric was achieved by optimizing the adjustable parameters. The features of each similarity metric were obtained and analyzed with practical data sets. In addition, an ideology of combining two traditional metrics was proposed in the thesis and it was proven applicable and efficient by the metrics combination of Pearson correlation and Euclidean distance. The observation and evaluation of traditional similarity metrics with practical data is helpful to understand their features and suitability, from which new models can be created. Besides, the ideology proposed for modeling new similarity metrics can be found useful both theoretically and practically.", "title": "" }, { "docid": "0028061d8bd57be4aaf6a01995b8c3bb", "text": "Steganography is the art of concealing the existence of information within seemingly harmless carriers. It is a method similar to covert channels, spread spectrum communication and invisible inks which adds another step in security. A message in cipher text may arouse suspicion while an invisible message will not. A digital image is a flexible medium used to carry a secret message because the slight modification of a cover image is hard to distinguish by human eyes. In this paper, we propose a revised version of an information-hiding scheme using the Sudoku puzzle. The original work was proposed by Chang et al. in 2008, and their work was inspired by Zhang and Wang's method and Sudoku solutions. Chang et al. successfully used Sudoku solutions to guide cover pixels to modify pixel values so that secret messages can be embedded. Our proposed method is a modification of Chang et al.’s method. Here a 27 × 27 reference matrix is used instead of the 256 × 256 reference matrix proposed in the previous method. 
The earlier version is for a grayscale image but the proposed method is for a colored image.", "title": "" }, { "docid": "40bdadc044f5342534ba5387c47c6456", "text": "A numerical study of atmospheric turbulence effects on wind-turbine wakes is presented. Large-eddy simulations of neutrally-stratified atmospheric boundary layer flows through stand-alone wind turbines were performed over homogeneous flat surfaces with four different aerodynamic roughness lengths. Emphasis is placed on the structure and characteristics of turbine wakes in the cases where the incident flows to the turbine have the same mean velocity at the hub height but different mean wind shears and turbulence intensity levels. The simulation results show that the different turbulence intensity levels of the incoming flow lead to considerable influence on the spatial distribution of the mean velocity deficit, turbulence intensity, and turbulent shear stress in the wake region. In particular, when the turbulence intensity level of the incoming flow is higher, the turbine-induced wake (velocity deficit) recovers faster, and the locations of the maximum turbulence intensity and turbulent stress are closer to the turbine. A detailed analysis of the turbulence kinetic energy budget in the wakes reveals also an important effect of the incoming flow turbulence level on the magnitude and spatial distribution of the shear production and transport terms.", "title": "" }, { "docid": "718cf9a405a81b9a43279a1d02f5e516", "text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. 
The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.", "title": "" }, { "docid": "b1ecd3c12161f64640ffb1ac2b02b68a", "text": "Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these, we have created a domain-targeted, high precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high precision knowledge targeted to a particular domain - in our case, elementary science. 
To measure the KB’s coverage of the target domain’s knowledge (its “comprehensiveness” with respect to science) we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80% precision and 23% recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available.", "title": "" }, { "docid": "85c3dc3dae676f0509a99c6d27db8423", "text": "Swarming, or aggregations of organisms in groups, can be found in nature in many organisms ranging from simple bacteria to mammals. Such behavior can result from several different mechanisms. For example, individuals may respond directly to local physical cues such as concentration of nutrients or distribution of some chemicals as seen in some bacteria and social insects, or they may respond directly to other individuals as seen in fish, birds, and herds of mammals. In this dissertation, we consider models for aggregating and social foraging swarms and perform rigorous stability analysis of emerging collective behavior. Moreover, we consider formation control of a general class of multi-agent systems in the framework of nonlinear output regulation problem with application on formation control of mobile robots. First, an individual-based continuous time model for swarm aggregation in an n-dimensional space is identified and its stability properties are analyzed. The motion of each individual is determined by two factors: (i) attraction to the other individuals on long distances and (ii) repulsion from the other individuals on short distances. It is shown that the individuals (autonomous agents or biological creatures) will form a cohesive swarm in a finite time. Moreover, explicit bounds on the swarm size and time of convergence are derived. 
Then, the results are generalized to a more general class of attraction/repulsion functions and extended to handle formation stabilization and uniform swarm density. After that, we consider social foraging swarms. We assume that the swarm is moving in an environment with an “attractant/repellent” profile (i.e., a profile of nutrients or toxic substances) which also affects the motion of each individual by an attraction to the more favorable or nutrient rich regions (or repulsion from the unfavorable or toxic regions) of the profile. The stability properties of the collective behavior of the swarm for different profiles are studied and conditions for collective convergence to more favorable regions are provided. Then, we use the ideas for modeling and analyzing the behavior of honey bee clusters and in-transit swarms, a phenomenon seen during the reproduction of the bees. After that, we consider one-dimensional asynchronous swarms with time delays. We prove that, despite the asynchronism and time delays in the motion of the individuals, the swarm will converge to a comfortable position with comfortable intermember spacing. Finally, we consider formation control of a multi-agent system with general nonlinear dynamics. It is assumed that the formation is required to follow a virtual leader whose dynamics are generated by an autonomous neutrally stable system. We develop a decentralized control strategy based on the nonlinear output regulation (servomechanism) theory. We illustrate the procedure with application to formation control of mobile robots.", "title": "" }, { "docid": "7dde662184f9dc0363df5cfeffc4724e", "text": "WordNet is a lexical reference system, developed by the University of Princeton. This paper gives a detailed documentation of the Prolog database of WordNet and predicates to interface it. 
", "title": "" }, { "docid": "3a6197322da0e5fe2c2d98a8fcba7a42", "text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.", "title": "" }, { "docid": "497d72ce075f9bbcb2464c9ab20e28de", "text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.", "title": "" }, { "docid": "218813a16c9dd6db5f4ce5a55250c1f6", "text": "The hippocampus frequently replays memories of past experiences during sharp-wave ripple (SWR) events. 
These events can represent spatial trajectories extending from the animal's current location to distant locations, suggesting a role in the evaluation of upcoming choices. While SWRs have been linked to learning and memory, the specific role of awake replay remains unclear. Here we show that there is greater coordinated neural activity during SWRs preceding correct, as compared to incorrect, trials in a spatial alternation task. As a result, the proportion of cell pairs coactive during SWRs was predictive of subsequent correct or incorrect responses on a trial-by-trial basis. This effect was seen specifically during early learning, when the hippocampus is essential for task performance. SWR activity preceding correct trials represented multiple trajectories that included both correct and incorrect options. These results suggest that reactivation during awake SWRs contributes to the evaluation of possible choices during memory-guided decision making.", "title": "" } ]
scidocsrr
88dfe199847320a540146e0a510a0db7
Automated anomaly detection and performance modeling of enterprise applications
[ { "docid": "9b628f47102a0eee67e469e223ece837", "text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.", "title": "" }, { "docid": "7e0c7042c7bc4d1084234f48dd2e0333", "text": "Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. 
The problem becomes much harder when systems are composed of \"black-box\" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes.We approach this problem by obtaining message-level traces of system activity, as passively as possible and without any knowledge of node internals or message semantics. We have developed two very different algorithms for inferring the dominant causal paths through a distributed system from these traces. One uses timing information from RPC messages to infer inter-call causality; the other uses signal-processing techniques. Our algorithms can ascribe delay to specific nodes on specific causal paths. Unlike previous approaches to similar problems, our approach requires no modifications to applications, middleware, or messages.", "title": "" } ]
[ { "docid": "7b4400c6ef5801e60a6f821810538381", "text": "A CMOS self-biased fully differential amplifier is presented. Due to the self-biasing structure of the amplifier and its associated negative feedback, the amplifier is compensated to achieve low sensitivity to process, supply voltage and temperature (PVT) variations. The output common-mode voltage of the amplifier is adjusted through the same biasing voltages provided by the common-mode feedback (CMFB) circuit. The amplifier core is based on a simple structure that uses two CMOS inverters to amplify the input differential signal. Despite its simple structure, the proposed amplifier is attractive to a wide range of applications, especially those requiring low power and small silicon area. As two examples, a sample-and-hold circuit and a second order multi-bit sigma-delta modulator, each employing the proposed amplifier, are presented. Besides these application examples, a set of amplifier performance parameters is given.", "title": "" }, { "docid": "3a066516f52dec6150fcf4a8e081605f", "text": "Writer: Julie Risbourg Title: Breaking the ‘glass ceiling’ Subtitle: A Critical Discourse Analysis of how powerful businesswomen are portrayed in The Economist online Language: English Pages: 52 Women still represent a minority in the executive world. Much research has been aimed at finding possible explanations concerning the underrepresentation of women in the male dominated executive sphere. The findings commonly suggest that a patriarchal society and the maintenance of gender stereotypes lead to inequalities and become obstacles for women to break the so-called ‘glass ceiling’. This thesis, however, aims to explore how businesswomen are represented once they have broken the glass ceiling and entered the executive world. 
Within the Forbes’ list of the 100 most powerful women of 2017, the two first businesswomen on the list were chosen, and their portrayals were analysed through articles published by The Economist online. The theoretical framework of this thesis includes Goffman’s framing theory and takes a cultural feminist perspective on exploring how the media outlet frames businesswomen Sheryl Sandberg and Mary Barra. The thesis also examines how these frames relate to the concepts of stereotyping, commonly used in the coverage of women in the media. More specifically, the study investigates whether negative stereotypes concerning their gender are present in the texts or if positive stereotypes such as idealisation are used to portray them. Those concepts are coupled with the theoretical aspect of the method, which is Critical Discourse Analysis. This method is chosen in order to explore the underlying meanings and messages The Economist chose to refer to these two businesswomen. This is done through the use of linguistic and visual tools, such as lexical choices, word connotations, nomination/functionalisation and gaze. The findings show that they were portrayed positively within a professional environment, and the publication celebrated their success and hard work. Moreover, the results also show that gender related traits were mentioned, showing a subjective representation, which is countered by their idealisation, via their presence in not only the executive world, but also having such high-working titles in male dominated industries.", "title": "" }, { "docid": "5090070d6d928b83bd22d380f162b0a6", "text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. 
It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict that en route center controllers working as a team of Radar and Data controllers with the automation tools available in our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.", "title": "" }, { "docid": "2b1a9f7131b464d9587137baf828cd3a", "text": "The description of the spatial characteristics of two- and three-dimensional objects, in the framework of MPEG-7, is considered. The shape of an object is one of its fundamental properties, and this paper describes an efficient way to represent the coarse shape, scale and composition properties of an object. This representation is invariant to resolution, translation and rotation, and may be used for both two-dimensional (2-D) and three-dimensional (3-D) objects. This coarse shape descriptor will be included in the eXperimentation Model (XM) of MPEG-7. Applications of such a description to search object databases, in particular the CAESAR anthropometric database, are discussed. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "b27ab468a885a3d52ec2081be06db2ef", "text": "The beautification of human photos usually requires professional editing software, which is difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). 
Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.", "title": "" }, { "docid": "00f0ba62d43b775ffd1c0809acef9175", "text": "1. T. Shiratori, A. Nakazawa, K. Ikeuchi, “Dancing-to-Music Character Animation”, In Computer Graphics Forum, Vol. 25, No. 3 (also in Eurographics 2006), Sep. 2006 (to appear) 2. T. Shiratori, A. Nakazawa, K. Ikeuchi, “Synthesizing Dance Performance Using Musical and Motion Features”, In Proc. of IEEE International Conference on Robotics and Automation (ICRA 2006), May 2006 A Dancing-to-Music ability for CG characters & humanoids", "title": "" }, { "docid": "dcdb6242febbef358efe5a1461957291", "text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.", "title": "" }, { "docid": "c4e80fd8e2c5b1795c016c9542f8f33e", "text": "Duckweeds, plants of the Lemnaceae family, have the distinction of being the smallest angiosperms in the world with the fastest doubling time. 
Together with its natural ability to thrive on abundant anthropogenic wastewater, these plants hold tremendous potential to help solve critical water, climate and fuel issues facing our planet this century. With the conviction that rapid deployment and optimization of the duckweed platform for biomass production will depend on close integration between basic and applied research of these aquatic plants, the first International Conference on Duckweed Research and Applications (ICDRA) was organized and took place in Chengdu, China, from October 7th to 10th of 2011. Co-organized with Rutgers University of New Jersey (USA), this Conference attracted participants from Germany, Denmark, Japan, Australia, in addition to those from the US and China. The following are concise summaries of the various oral presentations and final discussions over the 2.5 day conference that serve to highlight current research interests and applied research that are paving the way for the imminent deployment of this novel aquatic crop. We believe the sharing of this information with the broad Plant Biology community is an important step toward the renaissance of this excellent plant model that will have important impact on our quest for sustainable development of the world.", "title": "" }, { "docid": "2804384964bc8996e6574bdf67ed9cb5", "text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). 
Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.", "title": "" }, { "docid": "e8c9067f13c9a57be46823425deb783b", "text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. 
In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, and instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, and overcoming limitations on instruction count and dynamic branch instructions. The generated implementations have performance comparable with an expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.", "title": "" }, { "docid": "e66e7677aa769135a6a9b9ea5c807212", "text": "At ICSE'2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Eventually, we discuss the nature of fix acceptability and its relation to the notion of software correctness.", "title": "" }, { "docid": "29e360b1e1999a284d4e464ce4c9ed51", "text": "To study the role of brain oscillations in working memory, we recorded the scalp electroencephalogram (EEG) during the retention interval of a modified Sternberg task. A power spectral analysis of the EEG during the retention interval revealed a clear peak at 9-12 Hz, a frequency in the alpha band (8-13 Hz). 
In apparent conflict with previous ideas according to which alpha band oscillations represent brain \"idling\", we found that the alpha peak systematically increased with the number of items held in working memory. The enhancement was prominent over the posterior and bilateral central regions. The enhancement over posterior regions is most likely explained by the well known alpha rhythm produced close to the parietal-occipital fissure, whereas the lateral enhancement could be explained by sources in somato-motor cortex. A time-frequency analysis revealed that the enhancement was present throughout the last 2.5 s of the 2.8 s retention interval and that alpha power rapidly diminished following the probe. The load dependence and the tight temporal regulation of alpha provide strong evidence that the alpha generating system is directly or indirectly linked to the circuits responsible for working memory. Although a clear peak in the theta band (5-8 Hz) was only detectable in one subject, other lines of evidence indicate that theta occurs and also has a role in working memory. Hypotheses concerning the role of alpha band activity in working memory are discussed.", "title": "" }, { "docid": "4691ef360395aefb51a8fb086ae50991", "text": "Estimating 3D pose of a known object from a given 2D image is an important problem with numerous studies for robotics and augmented reality applications. While the state-of-the-art Perspective-n-Point algorithms perform well in pose estimation, the success hinges on whether feature points can be extracted and matched correctly on targets with rich texture. In this work, we propose a robust direct method for 3D pose estimation with high accuracy that performs well on both textured and textureless planar targets. First, the pose of a planar target with respect to a calibrated camera is approximately estimated by posing it as a template matching problem. 
Next, the object pose is further refined and disambiguated with a gradient descent search scheme. Extensive experiments on both synthetic and real datasets demonstrate the proposed direct pose estimation algorithm performs favorably against state-of-the-art feature-based approaches in terms of robustness and accuracy under several varying conditions.", "title": "" }, { "docid": "262f1e965b311bf866ef5b924b6085a7", "text": "By considering the amount of uncertainty perceived and the willingness to bear uncertainty concomitantly, we provide a more complete conceptual model of entrepreneurial action that allows for examination of entrepreneurial action at the individual level of analysis while remaining consistent with a rich legacy of system-level theories of the entrepreneur. Our model not only exposes limitations of existing theories of entrepreneurial action but also contributes to a deeper understanding of important conceptual issues, such as the nature of opportunity and the potential for philosophical reconciliation among entrepreneurship scholars.", "title": "" }, { "docid": "38a7f57900474553f6979131e7f39e5d", "text": "A cascade switched-capacitor ΔΣ analog-to-digital converter, suitable for WLANs, is presented. It uses a double-sampling scheme with single set of DAC capacitors, and an improved low-distortion architecture with an embedded-adder integrator. The proposed architecture eliminates one active stage, and reduces the output swings in the loop-filter and hence the non-linearity. It was fabricated with a 0.18um CMOS process. The prototype chip achieves 75.5 dB DR, 74 dB SNR, 73.8 dB SNDR, −88.1 dB THD, and 90.2 dB SFDR over a 10 MHz signal band with an FoM of 0.27 pJ/conv-step.", "title": "" }, { "docid": "22a2779e79ec8fcc2f3e20ffef52e219", "text": "Despite the great progress achieved in unconstrained face recognition, pose variations still remain a challenging and unsolved practical issue. 
We propose a novel framework for multi-view face recognition based on extracting and matching pose-robust face signatures from 2D images. Specifically, we propose an efficient method for monocular 3D face reconstruction, which is used to lift the 2D facial appearance to a canonical texture space and estimate the self-occlusion. On the lifted facial texture we then extract various local features, which are further enhanced by the occlusion encodings computed on the self-occlusion mask, resulting in a pose-robust face signature, a novel feature representation of the original 2D facial image. Extensive experiments on two public datasets demonstrate that our method not only simplifies the matching of multi-view 2D facial images by circumventing the requirement for pose-adaptive classifiers, but also achieves superior performance.", "title": "" }, { "docid": "3f5e8ac89e893d3166f5e3c50f91b8cc", "text": "Biosequences typically have a small alphabet, a long length, and patterns containing gaps (i.e., \"don't care\") of arbitrary size. Mining frequent patterns in such sequences faces a different type of explosion than in transaction sequences primarily motivated in market-basket analysis. In this paper, we study how this explosion affects the classic sequential pattern mining, and present a scalable two-phase algorithm to deal with this new explosion. The <i>Segment Phase</i> first searches for short patterns containing no gaps, called <i>segments</i>. This phase is efficient. The <i>Pattern Phase</i> searches for long patterns containing multiple segments separated by variable length gaps. This phase is time consuming. The purpose of two phases is to exploit the information obtained from the first phase to speed up the pattern growth and matching and to prune the search space in the second phase. 
We evaluate this approach on synthetic and real life data sets.", "title": "" }, { "docid": "9c59eb4f1843db91a2511db2ad5fd35c", "text": "Segmentation is an important task of any Optical Character Recognition (OCR) system. It separates the image text documents into lines, words and characters. The accuracy of OCR system mainly depends on the segmentation algorithm being used. Segmentation of handwritten text of some Indian languages like Kannada, Telugu, Assamese is difficult when compared with Latin based languages because of its structural complexity and increased character set. It contains vowels, consonants and compound characters. Some of the characters may overlap together. Despite several successful works in OCR all over the world, development of OCR tools in Indian languages is still an ongoing process. Character segmentation plays an important role in character recognition because incorrectly segmented characters are unlikely to be recognized correctly. In this paper, a segmentation scheme for segmenting handwritten Kannada scripts into lines, words and characters using morphological operations and projection profiles is proposed. The method was tested on totally unconstrained handwritten Kannada scripts, which pays more challenge and difficulty due to the complexity involved in the script. Usage of the morphology made extracting text lines efficient by an average extraction rate of 94.5% .Because of the varying inter and intra word gaps an average segmentation rate of 82.35% and 73.08% for words and characters respectively is obtained.", "title": "" }, { "docid": "aa16ca139a7648f7d9bb3ff81aaf0bbc", "text": "Atherosclerosis has an important inflammatory component and acute cardiovascular events can be initiated by inflammatory processes occurring in advanced plaques. Fatty acids influence inflammation through a variety of mechanisms; many of these are mediated by, or associated with, the fatty acid composition of cell membranes. 
Human inflammatory cells are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the marine n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids and these are usually biologically weak. EPA and DHA give rise to resolvins which are anti-inflammatory and inflammation resolving. EPA and DHA also affect production of peptide mediators of inflammation (adhesion molecules, cytokines, etc.). Thus, the fatty acid composition of human inflammatory cells influences their function; the contents of arachidonic acid, EPA and DHA appear to be especially important. The anti-inflammatory effects of marine n-3 polyunsaturated fatty acids (PUFAs) may contribute to their protective actions towards atherosclerosis and plaque rupture.", "title": "" }, { "docid": "e52a2c807612cb383076f2fae508c6cc", "text": "We present a new corpus for computational stylometry, more specifically authorship attribution and the prediction of author personality from text. Because of the large number of authors (145), the corpus will allow previously impossible studies of variation in features considered predictive for writing style. The innovative meta-information (personality profiles of the authors) associated with these texts allows the study of personality prediction, a not yet very well researched aspect of style. In this paper, we describe the contents of the corpus and show its use in both authorship attribution and personality prediction. We focus on features that have been proven useful in the field of author recognition. Syntactic features like part-of-speech n-grams are generally accepted as not being under the author’s conscious control and therefore providing good clues for predicting gender or authorship. 
We want to test whether these features are helpful for personality prediction and authorship attribution on a large set of authors. Both tasks are approached as text categorization tasks. First, a document representation is constructed based on feature selection from the linguistically analyzed corpus (using the Memory-Based Shallow Parser (MBSP)). These are associated with each of the 145 authors or each of the four components of the Myers-Briggs Type Indicator (Introverted-Extraverted, Sensing-iNtuitive, Thinking-Feeling, Judging-Perceiving). Authorship attribution on 145 authors achieves results around 50% accuracy. Preliminary results indicate that the first two personality dimensions can be predicted fairly accurately.", "title": "" } ]
scidocsrr
c07a68b567778d8078092945d68bc154
Crowdfunding: An Industrial Organization Perspective
[ { "docid": "540a6dd82c7764eedf99608359776e66", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" } ]
[ { "docid": "21b04c71f6c87b18f544f6b3f6570dd7", "text": "Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience in developing expert systems.", "title": "" }, { "docid": "3c4a8623330c48558ca178a82b68f06c", "text": "Humans assimilate information from the traffic environment mainly through visual perception. Obviously, the dominant information required to conduct a vehicle can be acquired with visual sensors. However, in contrast to most other sensor principles, video signals contain relevant information in a highly indirect manner and hence visual sensing requires sophisticated machine vision and image understanding techniques. This paper provides an overview of the state of research in the field of machine vision for intelligent vehicles. The functional spectrum addressed covers the range from advanced driver assistance systems to autonomous driving. The organization of the article adopts the typical order in image processing pipelines that successively condense the rich information and vast amount of data in video sequences. Data-intensive low-level “early vision” techniques first extract features that are later grouped and further processed to obtain information of direct relevance for vehicle guidance. Recognition and classification schemes allow the identification of specific objects in a traffic scene. 
Recently, semantic labeling techniques using convolutional neural networks have achieved impressive results in this field. High-level decisions of intelligent vehicles are often influenced by map data. The emerging role of machine vision in the mapping and localization process is illustrated with the example of autonomous driving. Scene representation methods are discussed that organize the information from all sensors and data sources and thus build the interface between perception and planning. Recently, vision benchmarks have been tailored to various tasks in traffic scene perception that provide a metric for the rich diversity of machine vision methods. Finally, the paper addresses computing architectures suited to real-time implementation. Throughout the paper, numerous specific examples and real-world experiments with prototype vehicles are presented.", "title": "" }, { "docid": "7d53fcce145badeeaeff55b5299010b9", "text": "Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regards to energy efficiency. 
First, we define a systematic approach for analyzing the energy efficiency of most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of a software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.", "title": "" }, { "docid": "961cc1dc7063706f8f66fc136da41661", "text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.", "title": "" }, { "docid": "c797e42772802ee9924a970593e5c81e", "text": "Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. 
Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay.", "title": "" }, { "docid": "6965b52a011bc47eb302d7602dd8bcba", "text": "We have developed a simple and expandable procedure for classification and validation of extracellular data based on a probabilistic model of data generation. This approach relies on an empirical characterization of the recording noise. We first use this noise characterization to optimize the clustering of recorded events into putative neurons. As a second step, we use the noise model again to assess the quality of each cluster by comparing the within-cluster variability to that of the noise. This second step can be performed independently of the clustering algorithm used, and it provides the user with quantitative as well as visual tests of the quality of the classification.", "title": "" }, { "docid": "899e3e436cdaed9efb66b7c9c296ea90", "text": "Background estimation and foreground segmentation are important steps in many high-level vision tasks. Many existing methods estimate background as a low-rank component and foreground as a sparse matrix without incorporating the structural information. Therefore, these algorithms exhibit degraded performance in the presence of dynamic backgrounds, photometric variations, jitter, shadows, and large occlusions. We observe that these backgrounds often span multiple manifolds. 
Therefore, constraints that ensure continuity on those manifolds will result in better background estimation. Hence, we propose to incorporate the spatial and temporal sparse subspace clustering into the robust principal component analysis (RPCA) framework. To that end, we compute a spatial and temporal graph for a given sequence using motion-aware correlation coefficient. The information captured by both graphs is utilized by estimating the proximity matrices using both the normalized Euclidean and geodesic distances. The low-rank component must be able to efficiently partition the spatiotemporal graphs using these Laplacian matrices. Embedded with the RPCA objective function, these Laplacian matrices constrain the background model to be spatially and temporally consistent, both on linear and nonlinear manifolds. The solution of the proposed objective function is computed by using the linearized alternating direction method with adaptive penalty optimization scheme. Experiments are performed on challenging sequences from five publicly available datasets and are compared with the 23 existing state-of-the-art methods. The results demonstrate excellent performance of the proposed algorithm for both the background estimation and foreground segmentation.", "title": "" }, { "docid": "edc97560247ca1a6270c957de44217c4", "text": "Fuzzing is a well-known black-box approach to the security testing of applications. Fuzzing has many advantages in terms of simplicity and effectiveness over more complex, expensive testing approaches. Unfortunately, current fuzzing tools suffer from a number of limitations, and, in particular, they provide little support for the fuzzing of stateful protocols. In this paper, we present SNOOZE, a tool for building flexible, securityoriented, network protocol fuzzers. SNOOZE implements a stateful fuzzing approach that can be used to effectively identify security flaws in network protocol implementations. 
SNOOZE allows a tester to describe the stateful operation of a protocol and the messages that need to be generated in each state. In addition, SNOOZE provides attack-specific fuzzing primitives that allow a tester to focus on specific vulnerability classes. We used an initial prototype of the SNOOZE tool to test programs that implement the SIP protocol, with promising results. SNOOZE supported the creation of sophisticated fuzzing scenarios that were able to expose real-world bugs in the programs analyzed.", "title": "" }, { "docid": "f86eea3192fe3dd8548cec52e53553e0", "text": "Prosopis juliflora is characterized by distinct and profuse growth even in nutritionally poor soil and environmentally stressed conditions and is believed to harbor some novel heavy metal-resistant bacteria in the rhizosphere and endosphere. This study was performed to isolate and characterize Cr-resistant bacteria from the rhizosphere and endosphere of P. juliflora growing on the tannery effluent contaminated soil. 
A total of 5 and 21 bacterial strains were isolated from the rhizosphere and endosphere, respectively, and were shown to tolerate Cr up to 3000 mg l(-1). These isolates also exhibited tolerance to other toxic heavy metals such as, Cd, Cu, Pb, and Zn, and high concentration (174 g l(-1)) of NaCl. Moreover, most of the isolated bacterial strains showed one or more plant growth-promoting activities. The phylogenetic analysis of the 16S rRNA gene showed that the predominant species included Bacillus, Staphylococcus and Aerococcus. As far as we know, this is the first report analyzing rhizo- and endophytic bacterial communities associated with P. juliflora growing on the tannery effluent contaminated soil. The inoculation of three isolates to ryegrass (Lolium multiflorum L.) improved plant growth and heavy metal removal from the tannery effluent contaminated soil suggesting that these bacteria could enhance the establishment of the plant in contaminated soil and also improve the efficiency of phytoremediation of heavy metal-degraded soils.", "title": "" }, { "docid": "cbdb038d8217ec2e0c4174519d6f2012", "text": "Many information retrieval algorithms rely on the notion of a good distance that allows to efficiently compare objects of different nature. Recently, a new promising metric called Word Mover’s Distance was proposed to measure the divergence between text passages. In this paper, we demonstrate that this metric can be extended to incorporate term-weighting schemes and provide more accurate and computationally efficient matching between documents using entropic regularization. We evaluate the benefits of both extensions in the task of cross-lingual document retrieval (CLDR). 
Our experimental results on eight CLDR problems suggest that the proposed methods achieve remarkable improvements in terms of Mean Reciprocal Rank compared to several baselines.", "title": "" }, { "docid": "2b74640b9f95e1004ffa10979946a4e6", "text": "A generic framework for the automated classification of human movements using an accelerometry monitoring system is introduced. The framework was structured around a binary decision tree in which movements were divided into classes and subclasses at different hierarchical levels. General distinctions between movements were applied in the top levels, and successively more detailed subclassifications were made in the lower levels of the tree. The structure was modular and flexible: parts of the tree could be reordered, pruned or extended, without the remainder of the tree being affected. This framework was used to develop a classifier to identify basic movements from the signals obtained from a single, waist-mounted triaxial accelerometer. The movements were first divided into activity and rest. The activities were classified as falls, walking, transition between postural orientations, or other movement. The postural orientations during rest were classified as sitting, standing or lying. In controlled laboratory studies in which 26 normal, healthy subjects carried out a set of basic movements, the sensitivity of every classification exceeded 87%, and the specificity exceeded 94%; the overall accuracy of the system, measured as the number of correct classifications across all levels of the hierarchy, was a sensitivity of 97.7% and a specificity of 98.7% over a data set of 1309 movements.", "title": "" }, { "docid": "8be72e103853aeac601aa65b61b98fd2", "text": "Opinion surveys usually employ multiple items to measure the respondent’s underlying value, belief, or attitude. 
To analyze such types of data, researchers have often followed a two-step approach by first constructing a composite measure and then using it in subsequent analysis. This paper presents a class of hierarchical item response models that help integrate measurement and analysis. In this approach, individual responses to multiple items stem from a latent preference, of which both the mean and variance may depend on observed covariates. Compared with the two-step approach, the hierarchical approach reduces bias, increases efficiency, and facilitates direct comparison across surveys covering different sets of items. Moreover, it enables us to investigate not only how preferences differ among groups, vary across regions, and evolve over time, but also levels, patterns, and trends of attitude polarization and ideological constraint. An open-source R package, hIRT, is available for fitting the proposed models.", "title": "" }, { "docid": "8717ccb9a12b4532aca5a747a3aeeeb2", "text": "The diaphragm is the primary muscle involved in active inspiration and serves also as an important anatomical landmark that separates the thoracic and abdominal cavity. However, the diaphragm muscle like other structures and organs in the human body has more than one function, and displays many anatomic links throughout the body, thereby forming a 'network of breathing'. Besides respiratory function, it is important for postural control as it stabilises the lumbar spine during loading tasks. 
It also plays a vital role in the vascular and lymphatic systems, as well as, is greatly involved in gastroesophageal functions such as swallowing, vomiting, and contributing to the gastroesophageal reflux barrier. In this paper we set out in detail the anatomy and embryology of the diaphragm and attempt to show it serves as both: an important exchange point of information, originating in different areas of the body, and a source of information in itself. The study also discusses all of its functions related to breathing.", "title": "" }, { "docid": "41eec7ed2d93fb415dfd197933975028", "text": "Open Information Extraction (OIE) is a recent unsupervised strategy to extract great amounts of basic propositions (verb-based triples) from massive text corpora which scales to Web-size document collections. We propose a multilingual rule-based OIE method that takes as input dependency parses in the CoNLL-X format, identifies argument structures within the dependency parses, and extracts a set of basic propositions from each argument structure. Our method requires no training data and, according to experimental studies, obtains higher recall and higher precision than existing approaches relying on training data. Experiments were performed in three languages: English, Portuguese, and Spanish.", "title": "" }, { "docid": "7143c97b6ea484566f521e36a3fa834e", "text": "To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional design was used. A general rehabilitation centre and a university rehabilitation centre was the setting for the study. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. 
Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; and in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43 and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.", "title": "" }, { "docid": "d5233cdbe0044f2296be6136f459edcf", "text": "Road detection is one of the key issues of scene understanding for Advanced Driving Assistance Systems (ADAS). Recent approaches have addressed this issue through the use of different kinds of sensors, features and algorithms. The KITTI-ROAD benchmark has provided an open-access dataset and a standard evaluation means for road area detection. In this paper, we propose an improved road detection algorithm that provides a pixel-level confidence map. The proposed approach is inspired by our former work based on road feature extraction using illuminant intrinsic image and plane extraction from v-disparity map segmentation. In the former research, detection results of road area are represented by a binary map. The novelty of this improved algorithm is to introduce likelihood theory to build a confidence map of road detection. Such a strategy copes better with ambiguous environments, compared to a simple binary map. 
Evaluations and comparisons of both the binary map and the confidence map have been done using the KITTI-ROAD benchmark.", "title": "" }, { "docid": "bcf69b1d42d28b8ba66b133ad6421cc4", "text": "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar to or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.", "title": "" } ]
scidocsrr
3b13ed8021bf0a68f8d5ef6227655b20
MMD GAN: Towards Deeper Understanding of Moment Matching Network
[ { "docid": "839b6bd24c7e020b0feef197cd6d9f92", "text": "We consider training a deep neural network to generate samples from an unknown distribution given i.i.d. data. We frame learning as an optimization minimizing a two-sample test statistic—informally speaking, a good generator network produces samples that cause a twosample test to fail to reject the null hypothesis. As our two-sample test statistic, we use an unbiased estimate of the maximum mean discrepancy, which is the centerpiece of the nonparametric kernel two-sample test proposed by Gretton et al. [2]. We compare to the adversarial nets framework introduced by Goodfellow et al. [1], in which learning is a two-player game between a generator network and an adversarial discriminator network, both trained to outwit the other. From this perspective, the MMD statistic plays the role of the discriminator. In addition to empirical comparisons, we prove bounds on the generalization error incurred by optimizing the empirical MMD.", "title": "" } ]
[ { "docid": "415bc7233b0f20d370d4018298ef45c5", "text": "A modular layered acetabular component (metal-polyethylene-ceramic) was developed in Japan for use in alumina ceramic-on-ceramic total hip replacement. Between May 1999 and July 2000, we performed 35 alumina ceramic-on-ceramic total hip replacements in 30 consecutive patients, using this layered component and evaluated the clinical and radiological results over a mean follow-up of 5.8 years (5 to 6.5). A total of six hips underwent revision, one for infection, two for dislocation with loosening of the acetabular component, two for alumina liner fractures and one for component dissociation with pelvic osteolysis. There were no fractures of the ceramic heads, and no loosening of the femoral or acetabular component in the unrevised hips was seen at final follow-up. Osteolysis was not observed in any of the unrevised hips. The survivorship analysis at six years after surgery was 83%. The layered acetabular component in our experience, has poor durability because of unexpected mechanical failures including alumina liner fracture and component dissociation.", "title": "" }, { "docid": "c10916da1311b2ccee2e6000b7ce907c", "text": "We proposed a deformable patches based method for single image super-resolution. By the concept of deformation, a patch is not regarded as a fixed vector but a flexible deformation flow. Via deformable patches, the dictionary can cover more patterns that do not appear, thus becoming more expressive. We present the energy function with slow, smooth and flexible prior for deformation model. During example-based super-resolution, we develop the deformation similarity based on the minimized energy function for basic patch matching. For robustness, we utilize multiple deformed patches combination for the final reconstruction. 
Experiments evaluate the deformation effectiveness and super-resolution performance, showing that the deformable patches help improve the representation accuracy and perform better than the state-of-the-art methods.", "title": "" }, { "docid": "f56bac3cb4ea99626afa51907e909fa3", "text": "An overview of technologies concerned with distributing the execution of simulation programs across multiple processors is presented. Here, particular emphasis is placed on discrete event simulations. The High Level Architecture (HLA) developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation. The remainder of this paper is focused on time management, a central issue concerning the synchronization of computations on different processors. Time management algorithms broadly fall into two categories, termed conservative and optimistic synchronization. A survey of both conservative and optimistic algorithms is presented focusing on fundamental principles and mechanisms. Finally, time management in the HLA is discussed as a means to illustrate how this standard supports both approaches to synchronization.", "title": "" }, { "docid": "8296954ffde770f611d86773f72fb1b4", "text": "Group and asynchronous commit? Better I/O performance, but contention unchanged. It reduces buffer contention, but... Log space partitioning: by page or by transaction? 
This impacts locality and the recovery strategy. Dependency tracking: before commit, T4 must persist log records written by itself, by its direct transaction dependencies (T4 → T2), by its direct page dependencies (T4 → T3), and by its transitive dependencies (T4 → {T3, T2} → T1). Storage is slow, so T4 flushes all four logs upon commit (instead of one). CPU-cycle breakdown: log work (20%), log contention (46%), other work (21%), with the remainder attributed to the lock manager and other contention.", "title": "" }, { "docid": "4dc05debbbe6c8103d772d634f91c86c", "text": "In this paper, a new robust fault detection and isolation (FDI) methodology for an unmanned aerial vehicle (UAV) is proposed. 
The fault diagnosis scheme is constructed based on observer-based techniques according to fault models corresponding to each component (actuator, sensor, and structure). The proposed fault diagnosis method takes advantage of the structural perturbation of the UAV model due to the icing (the main structural fault in aircraft), sensor, and actuator faults to reduce the error of observers that are used in the FDI module in addition to distinguishing among faults in different components. Moreover, the accuracy of the FDI module is increased by considering the structural perturbation of the UAV linear model due to wind disturbances which is the major environmental disturbance affecting an aircraft. Our envisaged FDI strategy is capable of diagnosing recurrent faults through properly designed residuals with different responses to different types of faults. Simulation results are provided to illustrate and demonstrate the effectiveness of our proposed FDI approach due to faults in sensors, actuators, and structural components of unmanned aerial vehicles.", "title": "" }, { "docid": "b6f3dab3391a594712fdad3b31be2062", "text": "Social media has become a part of our daily life and we use it for many reasons. One of its uses is to get our questions answered. Given a multitude of social media sites, however, one immediate challenge is to pick the most relevant site for a question. This is a challenging problem because (1) questions are usually short, and (2) social media sites evolve. In this work, we propose to utilize topic specialization to find the most relevant social media site for a given question. In particular, semantic knowledge is considered for topic specialization as it can not only make a question more specific, but also dynamically represent the content of social sites, which relates a given question to a social media site. Thus, we propose to rank social media sites based on combined search engine query results. 
Our algorithm yields compelling results for providing a meaningful and consistent site recommendation. This work helps further understand the innate characteristics of major social media platforms for the design of social Q&A systems.", "title": "" }, { "docid": "1b2cb24f86191947973d9e1847908ec7", "text": "As a long-standing problem in computer vision, face detection has attracted much attention in recent decades for its practical applications. With the availability of face detection benchmark WIDER FACE dataset, much of the progresses have been made by various algorithms in recent years. Among them, the Selective Refinement Network (SRN) face detector introduces the two-step classification and regression operations selectively into an anchor-based face detector to reduce false positives and improve location accuracy simultaneously. Moreover, it designs a receptive field enhancement block to provide more diverse receptive field. In this report, to further improve the performance of SRN, we exploit some existing techniques via extensive experiments, including new data augmentation strategy, improved backbone network, MS COCO pretraining, decoupled classification module, segmentation branch and Squeeze-and-Excitation block. Some of these techniques bring performance improvements, while few of them do not well adapt to our baseline. As a consequence, we present an improved SRN face detector by combining these useful techniques together and obtain the best performance on widely used face detection benchmark WIDER FACE dataset.", "title": "" }, { "docid": "3dcf5f63798458ed697a23664675f2fe", "text": "Volatility plays crucial roles in financial markets, such as in derivative pricing, portfolio risk management, and hedging strategies. Therefore, accurate prediction of volatility is critical. 
We propose a new hybrid long short-term memory (LSTM) model to forecast stock price volatility that combines the LSTM model with various generalized autoregressive conditional heteroscedasticity (GARCH)-type models. We use KOSPI 200 index data to discover proposed hybrid models that combine an LSTM with one to three GARCH-type models. In addition, we compare their performance with existing methodologies by analyzing single models, such as the GARCH, exponential GARCH, exponentially weighted moving average, a deep feedforward neural network (DFN), and the LSTM, as well as the hybrid DFN models combining a DFN with one GARCH-type model. Their performance is compared with that of the proposed hybrid LSTM models. We discover that GEW-LSTM, a proposed hybrid model combining the LSTM model with three GARCH-type models, has the lowest prediction errors in terms of mean absolute error (MAE), mean squared error (MSE), heteroscedasticity adjusted MAE (HMAE), and heteroscedasticity adjusted MSE (HMSE). The MAE of GEW-LSTM is 0.0107, which is 37.2% less than that of the E-DFN (0.017), the model combining EGARCH and DFN and the best model among those existing. In addition, the GEW-LSTM has 57.3%, 24.7%, and 48% smaller MSE, HMAE, and HMSE, respectively. The first contribution of this study is its hybrid LSTM model that combines excellent sequential pattern learning with improved prediction performance in stock market volatility. Second, our proposed model markedly enhances prediction performance over the existing literature by combining a neural network model with multiple econometric models rather than only a single econometric model. Finally, the proposed methodology can be extended to various fields as an integrated model combining time-series and neural network models as well as forecasting stock market volatility.", "title": "" }, { "docid": "9433fc835573173c38598517a0fac87c", "text": "Recommendation and review sites offer a wealth of information beyond ratings. 
For instance, on IMDb users leave reviews, commenting on different aspects of a movie (e.g. actors, plot, visual effects), and expressing their sentiments (positive or negative) on these aspects in their reviews. This suggests that uncovering aspects and sentiments will allow us to gain a better understanding of users, movies, and the process involved in generating ratings.\n The ability to answer questions such as \"Does this user care more about the plot or about the special effects?\" or \"What is the quality of the movie in terms of acting?\" helps us to understand why certain ratings are generated. This can be used to provide more meaningful recommendations.\n In this work we propose a probabilistic model based on collaborative filtering and topic modeling. It allows us to capture the interest distribution of users and the content distribution for movies; it provides a link between interest and relevance on a per-aspect basis and it allows us to differentiate between positive and negative sentiments on a per-aspect basis. Unlike prior work our approach is entirely unsupervised and does not require knowledge of the aspect specific ratings or genres for inference.\n We evaluate our model on a live copy crawled from IMDb. Our model offers superior performance by joint modeling. Moreover, we are able to address the cold start problem -- by utilizing the information inherent in reviews our model demonstrates improvement for new users and movies.", "title": "" }, { "docid": "efc62afce10aab0be9b1ffebb2c38fee", "text": "Most regression problems in practice require flexible semiparametric forms of the predictor for modelling the dependence of responses on covariates. Moreover, it is often necessary to add random effects accounting for overdispersion caused by unobserved heterogeneity or for correlation in longitudinal or spatial data. 
We present a unified approach for Bayesian inference via Markov chain Monte Carlo simulation in generalized additive and semiparametric mixed models. Different types of covariates, such as the usual covariates with fixed effects, metrical covariates with non-linear effects, unstructured random effects, trend and seasonal components in longitudinal data and spatial covariates, are all treated within the same general framework by assigning appropriate Markov random field priors with different forms and degrees of smoothness. We applied the approach in several case-studies and consulting cases, showing that the methods are also computationally feasible in problems with many covariates and large data sets. In this paper, we choose two typical applications.", "title": "" }, { "docid": "11d9274a302192914e4191249ce6d7bd", "text": "Language serves as a cornerstone of human cognition. However, our knowledge about its neural basis is still a matter of debate, partly because ‘language’ is often ill-defined. Rather than equating language with ‘speech’ or ‘communication’, we propose that language is best described as a biologically determined computational cognitive mechanism that yields an unbounded array of hierarchically structured expressions. The results of recent brain imaging studies are consistent with this view of language as an autonomous cognitive mechanism, leading to a view of its neural organization, whereby language involves dynamic interactions of syntactic and semantic aspects represented in neural networks that connect the inferior frontal and superior temporal cortices functionally and structurally. Friederici et al. 
outline a view of the neural organization of language that is compatible with a description of language as a biologically determined computational mechanism that yields an infinite number of hierarchically structured expressions.", "title": "" }, { "docid": "6af82b74f0c5f78a013aba63e1ad08b1", "text": "Background/Objective: Many studies have identified early-life risk factors for subsequent childhood overweight/obesity, but few have evaluated how they combine to influence risk of childhood overweight/obesity. We examined associations, individually and in combination, of potentially modifiable risk factors in the first 1000 days after conception with childhood adiposity and risk of overweight/obesity in an Asian cohort. Methods: Six risk factors were examined: maternal pre-pregnancy overweight/obesity (body mass index (BMI) ⩾25 kg m−2), paternal overweight/obesity at 24 months post delivery, maternal excessive gestational weight gain, raised maternal fasting glucose during pregnancy (⩾5.1 mmol l−1), breastfeeding duration <4 months and early introduction of solid foods (<4 months). Associations between number of risk factors and adiposity measures (BMI, waist-to-height ratio (WHtR), sum of skinfolds (SSFs), fat mass index (FMI) and overweight/obesity) at 48 months were assessed using multivariable regression models. Results: Of 858 children followed up at 48 months, 172 (19%) had none, 274 (32%) had 1, 244 (29%) had 2, 126 (15%) had 3 and 42 (5%) had ⩾4 risk factors. Adjusting for confounders, significant graded positive associations were observed between number of risk factors and adiposity outcomes at 48 months. Compared with children with no risk factors, those with four or more risk factors had s.d. unit increases of 0.78 (95% confidence interval 0.41–1.15) for BMI, 0.79 (0.41–1.16) for WHtR, 0.46 (0.06–0.83) for SSF and 0.67 (0.07–1.27) for FMI. 
The adjusted relative risk of overweight/obesity in children with four or more risk factors was 11.1 (2.5–49.1) compared with children with no risk factors. Children exposed to maternal pre-pregnancy (11.8(9.8–13.8)%) or paternal overweight status (10.6(9.6-11.6)%) had the largest individual predicted probability of child overweight/obesity. Conclusions: Early-life risk factors added cumulatively to increase childhood adiposity and risk of overweight/obesity. Early-life and preconception intervention programmes may be more effective in preventing overweight/obesity if they concurrently address these multiple modifiable risk factors.", "title": "" }, { "docid": "6d2ebecdd8120fb6bcfa805bd62d2899", "text": "The oxidation of organic and inorganic compounds during ozonation can occur via ozone or OH radicals or a combination thereof. The oxidation pathway is determined by the ratio of ozone and OH radical concentrations and the corresponding kinetics. A huge database with several hundred rate constants for ozone and a few thousand rate constants for OH radicals is available. Ozone is an electrophile with a high selectivity. The second-order rate constants for oxidation by ozone vary over 10 orders of magnitude, between <0.1 M−1 s−1 and about 7 × 10^9 M−1 s−1. The reactions of ozone with drinking-water relevant inorganic compounds are typically fast and occur by an oxygen atom transfer reaction. Organic micropollutants are oxidized with ozone selectively. Ozone reacts mainly with double bonds, activated aromatic systems and non-protonated amines. In general, electron-donating groups enhance the oxidation by ozone whereas electron-withdrawing groups reduce the reaction rates. Furthermore, the kinetics of direct ozone reactions depend strongly on the speciation (acid-base, metal complexation). The reaction of OH radicals with the majority of inorganic and organic compounds is nearly diffusion-controlled. 
The degree of oxidation by ozone and OH radicals is given by the corresponding kinetics. Product formation from the ozonation of organic micropollutants in aqueous systems has only been established for a few compounds. It is discussed for olefines, amines and aromatic compounds. © 2002 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "6418f3a7c825353802066702dc353af2", "text": "Background and aims: Addiction to the internet and mobile phones may affect all aspects of students’ lives. Knowledge about the prevalence and related factors of internet and mobile phone addiction is necessary for planning prevention and treatment. This study was conducted to evaluate the prevalence of internet and mobile phone addiction among Iranian students. Methods: This cross-sectional study was conducted from June to April 2015 in Rasht, Iran. 
Using a stratified sampling method, 581 high school students from two regions of Rasht in the north of Iran were recruited as the subjects for this study. Data were collected using a demographics questionnaire, the Cell Phone Overuse Scale (COS), and the Internet Addiction Test (IAT). Analysis was performed using Statistical Package for Social Sciences (SPSS) software. Results: Of the 581 students who participated in the present study, 53.5% were female and the rest were male. The mean age of the students was 16.28±1.01 years. The mean score of the IAT was 42.03±18.22. Of the 581 students, 312 (53.7%), 218 (37.5%) and 51 (8.8%) showed normal, mild and moderate levels of internet addiction, respectively. The mean score of the COS was 55.10±19.86. Of the 581 students, 27 (4.6%), 451 (77.6%) and 103 (17.7%) showed low, moderate and high levels of mobile phone addiction, respectively. Conclusion: According to the findings of the present study, rates of mobile phone and internet addiction were high among Iranian students. Health care authorities should pay more attention to these problems.", "title": "" }, { "docid": "7efa29d888e6b1bc000ab01c5d620304", "text": "Cowden syndrome (CS) is an autosomal dominant genodermatosis that frequently affects several tissues with hamartomatous growth. The oral cavity is quite commonly involved with papillomatous lesions, which can be crucial to early diagnosis of this disease. In this series, 10 patients with a great diversity of manifestations associated with CS are presented, in whom oral papillomatosis was a constant and relevant finding to establish the diagnosis of CS. 
The role of the dentist in recognizing the oral lesions, the other diagnostic criteria, the risk for the development of malignancies, and the importance of lifetime follow-up are discussed.", "title": "" }, { "docid": "17806963c91f6d6981f1dcebf3880927", "text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent the feedback abuse are generally reliable but centralized under the control of few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.", "title": "" }, { "docid": "978e84b111435040668e0654dfb0d1c2", "text": "Detection of outliers in radar signals is a considerable challenge in maritime surveillance applications. High-Frequency Surface-Wave (HFSW) radars have attracted significant interest as potential tools for long-range target identification and outlier detection at over-the-horizon (OTH) distances. However, a number of disadvantages, such as their low spatial resolution and presence of clutter, have a negative impact on their accuracy. In this paper, we explore the applicability of deep learning techniques for detecting deviations from the norm in behavioral patterns of vessels (outliers) as they are tracked from an OTH radar. The proposed methodology exploits the nonlinear mapping capabilities of deep stacked autoencoders in combination with density-based clustering. A comparative experimental evaluation of the approach shows promising results in terms of the proposed methodology's performance.", "title": "" } ]
scidocsrr
937bec8416217a0f5577d1223c514146
Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots
[ { "docid": "749e11a625e94ab4e1f03a74aa6b3ab2", "text": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complementary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.", "title": "" } ]
[ { "docid": "fe3570c283fbf8b1f504e7bf4c2703a8", "text": "We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks.", "title": "" }, { "docid": "9da1449675af42a2fc75ba8259d22525", "text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). 
Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet \"brands\" such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term business-consumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). [Journal of Product & Brand Management, Vol. 9 No. 6, 2000, pp. 350-368, © MCB University Press, 1061-0421. The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University.] Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as \"perceptions about a brand as reflected by the brand associations held in consumer memory\". These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are
multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations (multi-dimensional), as they propose, or if they are simply indicators of brand associations (uni-dimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; Roedder John et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multi-dimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: (1) test a protocol for developing category-specific measures of brand image; (2) examine the conceptualization of brand associations as a multi-dimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and (3) explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. 
In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background: brand associations. According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything \"linked\" in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Scales to partially measure brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. 
Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, \"using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value\". Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). 
Brand image, brand attitude, and perceived quality. Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan, 1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand, whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitude", "title": "" }, { "docid": "7fafda966819bb780b8b2b6ada4cc468", "text": "Acne inversa (AI) is a chronic and recurrent inflammatory skin disease. It occurs in intertriginous areas of the skin and causes pain, drainage, malodor and scar formation. While supposedly caused by an autoimmune reaction, bacterial superinfection is a secondary event in the disease process. A unique case of a 43-year-old male patient suffering from a recurring AI lesion in the left axilla was retrospectively analysed. A swab revealed Actinomyces neuii as the only agent growing in the lesion. The patient was then treated with Amoxicillin/Clavulanic Acid 3 × 1 g until he was cleared for surgical excision. The intraoperative swab was negative for A. neuii. 
Antibiotics were prescribed for another 4 weeks and the patient has remained relapse-free for more than 12 months now. Primary cutaneous Actinomycosis is a rare entity and the combination of AI and Actinomycosis has never been reported before. Failure to detect superinfections of AI lesions with slow-growing pathogens like Actinomyces spp. might contribute to high recurrence rates after immunosuppressive therapy of AI. The present case underlines the potentially multifactorial pathogenesis of the disease and the importance of considering and treating potential infections before initiating immunosuppressive regimens for AI patients.", "title": "" }, { "docid": "627587e2503a2555846efb5f0bca833b", "text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. 
In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.", "title": "" }, { "docid": "53307a72e0a50b65da45f83e5a8ff9f0", "text": "Although few studies dispute that there are gender differences in depression, the etiology is still unknown. In this review, we cover a number of proposed factors and the evidences for and against these factors that may account for gender differences in depression. These include the possible role of estrogens at puberty, differences in exposure to childhood trauma, differences in stress perception between men and women and the biological differences in stress response. None of these factors seem to explain gender differences in depression. Finally, we do know that when depressed, women show greater hypothalamic–pituitary–adrenal (HPA) axis activation than men and that menopause with loss of estrogens show the greatest HPA axis dysregulation. It may be the constantly changing steroid milieu that contributes to these phenomena and vulnerability to depression.", "title": "" }, { "docid": "2cac667e743d0a020ef136215339e1ed", "text": "We present the design and experimental validation of a scalable dc microgrid for rural electrification in emerging regions. A salient property of the dc microgrid architecture is the distributed control of the grid voltage, which enables both instantaneous power sharing and a metric for determining the available grid power. A droop-voltage power-sharing scheme is implemented wherein the bus voltage droops in response to low supply/high demand. In addition, the architecture of the dc microgrid aims to minimize the losses associated with stored energy by distributing storage to individual households. In this way, the number of conversion steps and line losses are reduced. We calculate that the levelized cost of electricity of the proposed dc microgrid over a 15-year time horizon is $0.35/kWh. 
We also present the experimental results from a scaled-down experimental prototype that demonstrates the steady-state behavior, the perturbation response, and the overall efficiency of the system. Moreover, we present fault mitigation strategies for various faults that can be expected to occur in a microgrid distribution system. The experimental results demonstrate the suitability of the presented dc microgrid architecture as a technically advantageous and cost-effective method for electrifying emerging regions.", "title": "" }, { "docid": "e9438241965b4cb6601624456b60f990", "text": "This paper proposes a model for designing games around Artificial Intelligence (AI). AI-based games put AI in the foreground of the player experience rather than in a supporting role as is often the case in many commercial games. We analyze the use of AI in a number of existing games and identify design patterns for AI in games. We propose a generative ideation technique to combine a design pattern with an AI technique or capacity to make new AI-based games. Finally, we demonstrate this technique through two examples of AI-based game prototypes created using these patterns.", "title": "" }, { "docid": "e567034595d9bb6a236d15b8623efce7", "text": "In this paper, we use artificial neural networks (ANNs) for voice conversion and exploit the mapping abilities of an ANN model to perform mapping of spectral features of a source speaker to that of a target speaker. A comparative study of voice conversion using an ANN model and the state-of-the-art Gaussian mixture model (GMM) is conducted. The results of voice conversion, evaluated using subjective and objective measures, confirm that an ANN-based VC system performs as well as a GMM-based VC system, and the quality of the transformed speech is intelligible and possesses the characteristics of a target speaker. 
In this paper, we also address the issue of dependency of voice conversion techniques on parallel data between the source and the target speakers. While there have been efforts to use nonparallel data and speaker adaptation techniques, it is important to investigate techniques which capture speaker-specific characteristics of a target speaker, and avoid any need for source speaker's data either for training or for adaptation. In this paper, we propose a voice conversion approach using an ANN model to capture speaker-specific characteristics of a target speaker and demonstrate that such a voice conversion approach can perform monolingual as well as cross-lingual voice conversion of an arbitrary source speaker.", "title": "" }, { "docid": "c27eecae33fe87779d3452002c1bdf8a", "text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.", "title": "" }, { "docid": "2b540b2e48d5c381e233cb71c0cf36fe", "text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. 
We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.", "title": "" }, { "docid": "d2c13b3daa3712b32172126404b14c20", "text": "To adequately perform perioral rejuvenation procedures, it is necessary to understand the morphologic changes caused by facial aging. Anthropometric analyses of standardized frontal view and profile photographs could help to investigate such changes. Photographs of 346 male individuals were evaluated using 12 anthropometric indices. Data from two groups of healthy subjects, the first exhibiting a mean age of nearly 20 and the second of nearly 60 years, were compared. To evaluate the influence of combined nicotine and alcohol abuse, the data of the second group were compared to a third group exhibiting a similar mean age who were known alcohol and nicotine abusers. Comparison of the first to the second group showed a significant decrease of the vertical height of upper and lower vermilion and relative enlargement of the cutaneous part of upper and lower lips. This effect was stronger in the upper vermilion and medial upper lips. The sagging of the upper lips led to the appearance of an increased mouth width. In the third group the effect of sagging of the upper lips, and especially its medial portion, was significantly higher compared to the second group. 
The photo-assisted anthropometric measurements investigated gave reproducible results related to perioral aging.", "title": "" }, { "docid": "00e56a93a3b8ee3a3d2cdab2fd27375e", "text": "Omnidirectional image and video have gained popularity thanks to availability of capture and display devices for this type of content. Recent studies have assessed performance of objective metrics in predicting visual quality of omnidirectional content. These metrics, however, have not been rigorously validated by comparing their prediction results with ground-truth subjective scores. In this paper, we present a set of 360-degree images along with their subjective quality ratings. The set is composed of four contents represented in two geometric projections and compressed with three different codecs at four different bitrates. A range of objective quality metrics for each stimulus is then computed and compared to subjective scores. Statistical analysis is performed in order to assess performance of each objective quality metric in predicting subjective visual quality as perceived by human observers. Results show the estimated performance of the state-of-the-art objective metrics for omnidirectional visual content. Objective metrics specifically designed for 360-degree content do not outperform conventional methods designed for 2D images.", "title": "" }, { "docid": "f395e3d72341bd20e1a16b97259bad7d", "text": "Malicious software in the form of Internet worms, computer viruses, and Trojan horses poses a major threat to the security of networked systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware families share typical behavioral patterns reflecting their origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a method for learning and discrimination of malware behavior.
Our method proceeds in three stages: (a) the behavior of collected malware is monitored in a sandbox environment, (b) based on a corpus of malware labeled by an anti-virus scanner, a malware behavior classifier is trained using learning techniques, and (c) discriminative features of the behavior models are ranked for explanation of classification decisions. Experiments with different heterogeneous test data collected over several months using honeypots demonstrate the effectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.", "title": "" }, { "docid": "1e100608fd78b1e20020f892784199ed", "text": "In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method [1]. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal to noise ratio. We show that the system significantly outperforms the state-of-the-art on a challenging real-world dataset.", "title": "" }, { "docid": "335220bbad7798a19403d393bcbbf7fb", "text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media posts, and advertisements, to a wide range of textual information from various domains (medical records, corporate reports).
To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and language-independent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.", "title": "" }, { "docid": "139d9d5866a1e455af954b2299bdbcf6", "text": "1. Introduction. Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of vital importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should be. (* This author's work was supported in part by DARPA contract N00039-82-C-0250.) For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false?
Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hil], is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibility relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive correspond to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the one-knower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, and thus no harder than the satisfiability problem for propositional logic.
Our aim in this paper is to reexamine the possible-worlds framework for knowledge and belief with four particular points of emphasis: (1) we show how general techniques for finding decision procedures and complete axiomatizations apply to models for knowledge and belief, (2) we show how sensitive the difficulty of the decision procedure is to such issues as the choice of modal operators and the axiom system, (3) we discuss how notions of common knowledge and implicit knowledge among a group of agents fit into the possible-worlds framework, and, finally, (4) we consider to what extent the possible-worlds approach is a viable one for modelling knowledge and belief. We begin in Section 2 by reviewing possible-worlds semantics in detail, and proving that the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizations of knowledge. In Section 3 we turn to complexity-theoretic issues. We review some standard notions from complexity theory, and then reprove and extend Ladner's results to show that the decision procedures for the many-knower versions of T, S4, and S5 are all complete in polynomial space.* This suggests that for S5, reasoning about many agents' knowledge is qualitatively harder than just reasoning about one agent's knowledge of the real world and of his own knowledge. In Section 4 we turn our attention to modifying the model so that it can deal with belief rather than knowledge, where one can believe something that is false. This turns out to be somewhat more complicated than dropping the assumption of reflexivity, but it can still be done in the possible-worlds framework. Results about decision procedures and complete axiomatizations for belief parallel those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language.
A group has common knowledge of a fact p exactly when everyone knows that everyone knows that everyone knows ... that p is true. (Common knowledge is essentially what McCarthy's \"fool\" knows; cf. [MSHI].) A group has implicit knowledge of p if, roughly speaking, when the agents pool their knowledge together they can deduce p. (Note our usage of the notion of \"implicit knowledge\" here differs slightly from the way it is used in [Lev2] and [FH].) As shown in [HM1], common knowledge is an essential state for reaching agreements and coordinating action. (* A problem is said to be complete with respect to a complexity class if, roughly speaking, it is the hardest problem in that class; see Section 3 for more details.) For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there are at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the case of one knower.
In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the full paper ([HM2]). 2.2 Possible-worlds semantics: Following Hintikka [Hil], Sato [Sa], Moore [Mo], and others, we use a possible-worlds semantics to model knowledge. This provides us with a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge\" in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate with each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. (* We discuss the ramifications of this point in Section 6. ** The name K(m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K.) [...] that can be said is that we are modelling a rather idealised reasoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold with respect to this interpretation ([Len]).
However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here with our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K(m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T(m), S4(m), and S5(m) respectively. We will try to motivate these conditions, but first we need a few definitions. (* Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may suspect that axiom A4 is redundant in S5. This indeed is the case.)", "title": "" }, { "docid": "5ca36a618eb3eee79e40228fa71dc029", "text": "To achieve the long-term goal of machines being able to engage humans in conversation, our models should be engaging. We focus on communication grounded in images, whereby a dialogue is conducted based on a given photo, a setup that is naturally engaging to humans (Hu et al., 2014). We collect a large dataset of grounded human-human conversations, where humans are asked to play the role of a given personality, as the use of personality in conversation has also been shown to be engaging (Shuster et al., 2018). Our dataset, ImageChat, consists of 202k dialogues and 401k utterances over 202k images using 215 possible personality traits. We then design a set of natural architectures using state-of-the-art image and text representations, considering various ways to fuse the components.
Automatic metrics and human evaluations show the efficacy of our approach, in particular where our best performing model is preferred over human conversationalists 47.7% of the time.", "title": "" }, { "docid": "20c3addef683da760967df0c1e83f8e3", "text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.", "title": "" }, { "docid": "cc5126ea8a6f9ebca587970377966067", "text": "In this paper, the reliability model of the converter valves in a VSC-HVDC system is analyzed. The internal structure and functions of the converter valve are presented. Taking the StakPak IGBT from ABB Semiconductors as an example, the mathematical reliability model for the converter valve and its sub-module is established. By means of calculation and analysis, the reliability indices of the converter valve under various voltage classes and redundancy designs are obtained, and then the optimal redundant scheme is chosen. Keywords: Reliability Analysis; VSC-HVDC; Converter Valve", "title": "" }, { "docid": "1e4f13016c846039f7bbed47810b8b3d", "text": "This paper characterizes general properties of useful, or effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.", "title": "" } ]
scidocsrr
9df8224d325ca1e50436263cac44e704
Deep Learning towards Mobile Applications
[ { "docid": "26dac00bc328dc9c8065ff105d1f8233", "text": "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ~ 6× speed-up and 15 ~ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.", "title": "" } ]
[ { "docid": "abedd6f0896340a190750666b1d28d91", "text": "This study aimed to characterize the neural generators of the early components of the visual evoked potential (VEP) to isoluminant checkerboard stimuli. Multichannel scalp recordings, retinotopic mapping and dipole modeling techniques were used to estimate the locations of the cortical sources giving rise to the early C1, P1, and N1 components. Dipole locations were matched to anatomical brain regions visualized in structural magnetic resonance imaging (MRI) and to functional MRI (fMRI) activations elicited by the same stimuli. These converging methods confirmed previous reports that the C1 component (onset latency 55 msec; peak latency 90-92 msec) was generated in the primary visual area (striate cortex; area 17). The early phase of the P1 component (onset latency 72-80 msec; peak latency 98-110 msec) was localized to sources in dorsal extrastriate cortex of the middle occipital gyrus, while the late phase of the P1 component (onset latency 110-120 msec; peak latency 136-146 msec) was localized to ventral extrastriate cortex of the fusiform gyrus. Among the N1 subcomponents, the posterior N150 could be accounted for by the same dipolar source as the early P1, while the anterior N155 was localized to a deep source in the parietal lobe. These findings clarify the anatomical origin of these VEP components, which have been studied extensively in relation to visual-perceptual processes.", "title": "" }, { "docid": "d6441d868b19d397740ef87ff700b3e9", "text": "Distant supervised relation extraction is an efficient approach to scale relation extraction to very large corpora, and has been widely used to find novel relational facts from plain text. Recent studies on neural relation extraction have shown great progress on this task via modeling the sentences in low-dimensional spaces, but seldom considered syntax information to model the entities. 
In this paper, we propose to learn syntax-aware entity embedding for neural relation extraction. First, we encode the context of entities on a dependency tree as sentence-level entity embedding based on tree-GRU. Then, we utilize both intra-sentence and inter-sentence attentions to obtain sentence set-level entity embedding over all sentences containing the focus entity pair. Finally, we combine both sentence embedding and entity embedding for relation classification. We conduct experiments on a widely used real-world dataset and the experimental results show that our model can make full use of all informative instances and achieve state-of-the-art performance of relation extraction.", "title": "" }, { "docid": "fed9defe1a4705390d72661f96b38519", "text": "Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra. We propose a determinantal formula for the sparse resultant of an arbitrary system of n + 1 polynomials in n variables. This resultant generalizes the classical one and has significantly lower degree for polynomials that are sparse in the sense that their mixed volume is lower than their Bézout number. Our algorithm uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes in order to construct a Newton matrix. Its determinant is a nonzero multiple of the sparse resultant and the latter equals the GCD of at most n + 1 such determinants. This construction implies a restricted version of an effective sparse Nullstellensatz. For an arbitrary specialization of the coefficients, there are two methods that use one extra variable and yield the sparse resultant. This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n.
We conjecture its extension to producing an exact rational expression for the sparse resultant.", "title": "" }, { "docid": "2dd42cce112c61950b96754bb7b4df10", "text": "Hierarchical methods have been widely explored for object recognition, which is a critical component of scene understanding. However, few existing works are able to model the contextual information (e.g., object co-occurrence) explicitly within a single coherent framework for scene understanding. Towards this goal, in this paper we propose a novel three-level (superpixel level, object level and scene level) hierarchical model to address the scene categorization problem. Our proposed model is a coherent probabilistic graphical model that captures the object co-occurrence information for scene understanding with a probabilistic chain structure. The efficacy of the proposed model is demonstrated by conducting experiments on the LabelMe dataset.", "title": "" }, { "docid": "9676c561df01b794aba095dc66b684f8", "text": "The differentiation of B lymphocytes in the bone marrow is guided by the surrounding microenvironment determined by cytokines, adhesion molecules, and the extracellular matrix. These microenvironmental factors are mainly provided by stromal cells. In this paper, we report the identification of a VCAM-1-positive stromal cell population by flow cytometry. This population showed the expression of cell surface markers known to be present on stromal cells (CD10, CD13, CD90, CD105) and had a fibroblastoid phenotype in vitro. Single cell RT-PCR analysis of its cytokine expression pattern revealed transcripts for haematopoietic cytokines important for either the early B lymphopoiesis like flt3L or the survival of long-lived plasma cells like BAFF or both processes like SDF-1. Whereas SDF-1 transcripts were detectable in all VCAM-1-positive cells, flt3L and BAFF were only expressed by some cells, suggesting the putative existence of different subpopulations with distinct functional properties.
In summary, the VCAM-1-positive cell population seems to be a candidate stromal cell population supporting developing B cells and/or long-lived plasma cells in human bone marrow.", "title": "" }, { "docid": "3e0076e4f2e69238c5f5ebcdc1dbbda1", "text": "This work presents a self-biased MOSFET threshold voltage VT0 monitor. The threshold condition is defined based on a current-voltage relationship derived from a continuous physical model. The model is valid for any operating condition, from weak to strong inversion, and under triode or saturation regimes. The circuit consists of balancing two self-cascode cells operating at different inversion levels, where one of the transistors that compose these cells is biased at the threshold condition. The circuit is MOSFET-only (can be implemented in any standard digital process), and it operates with a power supply of less than 1 V, consuming tenths of nW. We propose a process-independent design methodology, evaluating different trade-offs of accuracy, area and power consumption. Schematic simulation results, including Monte Carlo variability analysis, support the VT0 monitoring behavior of the circuit with good accuracy on a 180 nm process.", "title": "" }, { "docid": "837d1ef60937df15afc320b2408ad7b0", "text": "Zero-shot learning has tremendous application value in complex computer vision tasks, e.g. image classification, localization, image captioning, etc., for its capability of transferring knowledge from seen data to unseen data. Many recently proposed methods have shown that the formulation of a compatibility function and its generalization are crucial for the success of a zero-shot learning model. In this paper, we formulate a softmax-based compatibility function, and more importantly, propose a regularized empirical risk minimization objective to optimize the function parameter, which leads to better model generalization.
In comparison to eight baseline models on four benchmark datasets, our model achieved the highest average ranking. Our model was effective even when the training set size was small, significantly outperforming an alternative state-of-the-art model in generalized zero-shot recognition tasks.", "title": "" }, { "docid": "5bfedcfae127e808974ceaf0dca7970c", "text": "A new information-theoretic approach is presented for finding the registration of volumetric medical images of differing modalities. Registration is achieved by adjustment of the relative position and orientation until the mutual information between the images is maximized. In our derivation of the registration procedure, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can foreseeably be used with a wide variety of imaging devices. This approach works directly with image data; no pre-processing or segmentation is required. This technique is, however, more flexible and robust than other intensity-based techniques like correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, and with positron-emission tomography (PET) images. Surgical applications of the registration method are described.", "title": "" }, { "docid": "770d48a87dd718d20ea00c16ba0ac530", "text": "The purpose of this article is to describe emotion regulation, and how emotion regulation may be compromised in patients with autism spectrum disorder (ASD). This information may be useful for clinicians working with children with ASD who exhibit behavioral problems.
Suggestions for practice are provided.", "title": "" }, { "docid": "ac09e4a989bb4a9b247aa0ba346f1d71", "text": "Many applications in information extraction, natural language understanding, and information retrieval require an understanding of the semantic relations between entities. We present a comprehensive review of various aspects of the entity relation extraction task. Some of the most important supervised and semi-supervised classification approaches to the relation extraction task are covered in sufficient detail along with critical analyses. We also discuss extensions to higher-order relations. Evaluation methodologies for both supervised and semi-supervised methods are described along with pointers to the commonly used performance evaluation datasets. Finally, we also give short descriptions of two important applications of relation extraction, namely question answering and biotext mining.", "title": "" }, { "docid": "4463a242a313f82527c4bdfff3d3c13c", "text": "This paper examines the impact of capital structure on the financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven-year period 2004–2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capital structure, surrogated by the Debt Ratio (DR), has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). These findings indicate consistency with prior empirical studies and provide evidence in support of agency cost theory.", "title": "" }, { "docid": "9bf26d0e444ab8332ac55ce87d1b7797", "text": "Toll-like receptors (TLRs) have a central role in regulating innate immunity, and in the last decade studies have begun to reveal their significance in potentiating autoimmune diseases such as rheumatoid arthritis (RA).
Earlier investigations have highlighted the importance of TLR2 and TLR4 function in RA pathogenesis. In this review, we discuss the newer data that indicate roles for TLR5 and TLR7 in RA and its preclinical models. We evaluate the pathogenicity of TLRs in RA myeloid cells, synovial tissue fibroblasts, T cells, osteoclast progenitor cells and endothelial cells. These observations establish that ligation of TLRs can transform RA myeloid cells into M1 macrophages and that the inflammatory factors secreted from M1 and RA synovial tissue fibroblasts participate in TH-17 cell development. From the investigations conducted in RA preclinical models, we conclude that TLR-mediated inflammation can result in osteoclastic bone erosion by interconnecting the myeloid and TH-17 cell response to joint vascularization. In light of emerging unique aspects of TLR function, we summarize the novel approaches that are being tested to impair TLR activation in RA patients.", "title": "" }, { "docid": "7f8211ed8d7c8145f370c46b5bba3ddb", "text": "The adjectives of quantity (Q-adjectives) many, few, much and little stand out from other quantity expressions on account of their syntactic flexibility, occurring in positions that could be called quantificational (many students attended), predicative (John’s friends were many), attributive (the many students), differential (much more than a liter) and adverbial (slept too much). This broad distribution poses a challenge for the two leading theories of this class, which treat them as either quantifying determiners or predicates over individuals. This paper develops an analysis of Q-adjectives as gradable predicates of sets of degrees or (equivalently) gradable quantifiers over degrees. 
It is shown that this proposal allows a unified analysis of these items across the positions in which they occur, while also overcoming several issues facing competing accounts, among others the divergences between Q-adjectives and ‘ordinary’ adjectives, the operator-like behavior of few and little, and the use of much as a dummy element. Overall the findings point to the central role of degrees in the semantics of quantity.", "title": "" }, { "docid": "36efef11d536fa3b586af2eb5e0847fe", "text": "With the emergence of depth sensors like Microsoft Kinect, human hand gesture recognition has received ever-increasing research interest recently. A successful gesture recognition system usually relies heavily on having a good feature representation of the data, which is expected to be task-dependent as well as to cope with the challenges and opportunities induced by the depth sensor. In this paper, a feature learning approach based on sparse auto-encoder (SAE) and principal component analysis is proposed for recognizing human actions, i.e. finger-spelling or sign language, for RGB-D inputs. The proposed feature learning model consists of two components: First, features are learned separately from the RGB and depth channels, using a sparse auto-encoder with convolutional neural networks. Second, the learned features from both channels are concatenated and fed into a multi-layer PCA to get the final feature. Experimental results on the American Sign Language (ASL) dataset demonstrate that the proposed feature learning model is significantly effective, improving the recognition rate from 75% to 99.05% and outperforming the state-of-the-art.", "title": "" }, { "docid": "af359933fad5d689718e2464d9c4966c", "text": "Distant supervision can effectively label data for relation extraction, but suffers from the noise labeling problem.
Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision about false positive samples at the sentence level. In this paper, we introduce an adversarial learning framework, which we name DSGAN, to learn a sentence-level true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as negative samples to train the discriminator. The optimal generator is obtained when the discrimination ability of the discriminator shows the greatest decline. We adopt the generator to filter the distant supervision training dataset and redistribute the false positive instances into the negative set, thereby providing a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction compared to state-of-the-art systems.", "title": "" }, { "docid": "8bfefe89d708cd5573850db23b59d5d0", "text": "With the ever increasing volume of data and the ability to integrate different data sources, data quality problems abound. Duplicate detection, as an integral part of data cleansing, is essential in modern information systems. We present a complete duplicate detection workflow that utilizes the capabilities of modern graphics processing units (GPUs) to increase the efficiency of finding duplicates in very large datasets. Our solution covers several well-known algorithms for pair selection, attribute-wise similarity comparison, record-wise similarity aggregation, and clustering. We redesigned these algorithms to run memory-efficiently and in parallel on the GPU. Our experiments demonstrate that the GPU-based workflow is able to outperform a CPU-based implementation on large, real-world datasets.
For instance, the GPU-based algorithm deduplicates a dataset with 1.8m entities 10 times faster than a common CPU-based algorithm using comparably priced hardware.", "title": "" }, { "docid": "b348a2835a16ac271f2140f9057dcaa1", "text": "The variational method has been introduced by Kass et al. (1987) in the field of object contour modeling, as an alternative to the more traditional edge detection-edge thinning-edge sorting sequence. Since the method is based on a pre-processing of the image to yield an edge map, it shares the limitations of the edge detectors it uses. In this paper, we propose a modified variational scheme for contour modeling, which uses no edge detection step, but local computations instead—only around contour neighborhoods—as well as an “anticipating” strategy that enhances the modeling activity of deformable contour curves. Many of the concepts used were originally introduced to study the local structure of discontinuity, in a theoretical and formal statement by Leclerc & Zucker (1987), but never in a practical situation such as this one. The first part of the paper introduces a region-based energy criterion for active contours, and gives an examination of its implications, as compared to the gradient edge map energy of snakes. Then, a simplified optimization scheme is presented, accounting for internal and external energy in separate steps. This leads to a complete treatment, which is described in the last sections of the paper (4 and 5).
The optimization technique used here is mostly heuristic, and is thus presented without a formal proof, but is believed to fill a gap between snakes and other useful image representations, such as split-and-merge regions or mixed line-labels image fields.", "title": "" }, { "docid": "dba3434c600ed7ddbb944f0a3adb1ba0", "text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.", "title": "" }, { "docid": "c408992e89867e583b8232b18f37edf0", "text": "Fusion of information gathered from multiple sources is essential to build a comprehensive situation picture for autonomous ground vehicles. In this paper, an approach which performs scene parsing and data fusion for a 3D-LIDAR scanner (Velodyne HDL-64E) and a video camera is described.
First of all, a geometry segmentation algorithm is proposed for detection of obstacles and ground areas from data collected by the Velodyne scanner. Then, the corresponding image collected by the video camera is classified patch by patch into more detailed categories. After that, the parsing result of each frame is obtained by fusing the result from the Velodyne data and that from the image using the fuzzy logic inference framework. Finally, parsing results of consecutive frames are smoothed by the Markov random field based temporal fusion method. The proposed approach has been evaluated with datasets collected by our autonomous ground vehicle testbed in both rural and urban areas. The fused results are more reliable than those acquired via analysis of only images or Velodyne data. © 2013 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "3c3f3a9d6897510d5d5d3d55c882502c", "text": "Error-tolerant graph matching is a powerful concept that has various applications in pattern recognition and machine vision. In the present paper, a new distance measure on graphs is proposed. It is based on the maximal common subgraph of two graphs. The new measure is superior to edit distance based measures in that no particular edit operations together with their costs need to be defined. It is formally shown that the new distance measure is a metric. Potential algorithms for the efficient computation of the new measure are discussed. © 1998 Elsevier Science B.V. All rights reserved.", "title": "" } ]
scidocsrr
875a0c8b9996acd05f79d9ee24fd7ab4
Reactors: A Case for Predictable, Virtualized OLTP Actor Database Systems
[ { "docid": "9f45eff73f8e11306a240890b4db5eaf", "text": "Distributed storage systems run transactions across machines to ensure serializability. Traditional protocols for distributed transactions are based on two-phase locking (2PL) or optimistic concurrency control (OCC). 2PL serializes transactions as soon as they conflict and OCC resorts to aborts, leaving many opportunities for concurrency on the table. This paper presents ROCOCO, a novel concurrency control protocol for distributed transactions that outperforms 2PL and OCC by allowing more concurrency. ROCOCO executes a transaction as a collection of atomic pieces, each of which commonly involves only a single server. Servers first track dependencies between concurrent transactions without actually executing them. At commit time, a transaction’s dependency information is sent to all servers so they can re-order conflicting pieces and execute them in a serializable order. We compare ROCOCO to OCC and 2PL using a scaled TPC-C benchmark. ROCOCO outperforms 2PL and OCC in workloads with varying degrees of contention. When the contention is high, ROCOCO’s throughput is 130% and 347% higher than that of 2PL and OCC.", "title": "" } ]
[ { "docid": "a5e23ca50545378ef32ed866b97fd418", "text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.", "title": "" }, { "docid": "7acdc25c20b4aa16fc3391cb878a9577", "text": "Recurrent Neural Networks (RNNs) have long been recognized for their potential to model complex time series. However, it remains to be determined what optimization techniques and recurrent architectures can be used to best realize this potential. The experiments presented take a deep look into Hessian free optimization, a powerful second order optimization method that has shown promising results, but still does not enjoy widespread use. This algorithm was used to train a number of RNN architectures including standard RNNs, long short-term memory, multiplicative RNNs, and stacked RNNs on the task of character prediction. The insights from these experiments led to the creation of a new multiplicative LSTM hybrid architecture that outperformed both LSTM and multiplicative RNNs.
When tested on a larger scale, multiplicative LSTM achieved character level modelling results competitive with the state of the art for RNNs using very different methodology.", "title": "" }, { "docid": "b8573915765b33e1d57f34f7756cc235", "text": "Data mining is the process of finding correlations in relational databases. There are different techniques for identifying malicious database transactions. Unlike many existing approaches, which profile SQL query structures and database user activities to detect intrusion, the log mining approach is based on the automatic discovery of rules for identifying anomalous database transactions. Mining of the data is very helpful to end users for extracting useful business information from large databases. Multi-level and multi-dimensional data mining are employed to discover data item dependency rules, data sequence rules, domain dependency rules, and domain sequence rules from the database log containing legitimate transactions. Database transactions that do not comply with the rules are identified as malicious transactions. The log mining approach can achieve desired true and false positive rates when the confidence and support are set up appropriately. The implemented system incrementally maintains the data dependency rule sets and optimizes the performance of the intrusion detection process.", "title": "" }, { "docid": "a8b5f7a5ab729a7f1664c5a22f3b9d9b", "text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests.
As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.", "title": "" }, { "docid": "a7fe6b1ba27c13c95d1a48ca401e25fd", "text": "BACKGROUND\nSelecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We have presented two ordinal-variables clustering examples, as a more challenging variable type in analysis, using the Wisconsin Breast Cancer Data (WBCD).\n\n\nORDINAL-TO-INTERVAL SCALE CONVERSION EXAMPLE\nA breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests.\n\n\nRESULTS\nThe sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable.\n\n\nCONCLUSION\nBy using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted.
Moreover, descriptive and inferential statistics in addition to the modeling approach must be selected based on the scale of the variables.", "title": "" }, { "docid": "6a252976282ba1d0d354d8a86d0c49f1", "text": "Ethics of brain emulations. Whole brain emulation attempts to achieve software intelligence by copying the function of biological nervous systems into software. This paper aims at giving an overview of the ethical issues of the brain emulation approach, and at analysing how they should affect responsible policy for developing the field. Animal emulations have uncertain moral status, and a principle of analogy is proposed for judging treatment of virtual animals. Various considerations of developing and using human brain emulations are discussed. Introduction. Whole brain emulation (WBE) is an approach to achieve software intelligence by copying the functional structure of biological nervous systems into software. Rather than attempting to understand the high-level processes underlying perception, action, emotions and intelligence, the approach assumes that they would emerge from a sufficiently close imitation of the low-level neural functions, even if this is done through a software process. (Sandberg 2013) of brain emulations have been discussed, little analysis of the ethics of the project so far has been done. The main questions of this paper are to what extent brain emulations are moral patients, and what new ethical concerns are introduced as a result of brain emulation technology. The basic idea is to take a particular brain, scan its structure in detail at some resolution, construct a software model of the physiology that is so faithful to the original that, when run on appropriate hardware, it will have an internal causal structure that is essentially the same as the original brain. All relevant functions on some level of description are present, and higher level functions supervene from these.
While at present an unfeasibly ambitious challenge, the necessary computing power and various scanning methods are rapidly developing. Large scale computational brain models are a very active research area, at present reaching the size of mammalian nervous systems. al. 2012) WBE can be viewed as the logical endpoint of current trends in computational neuroscience and systems biology. Obviously the eventual feasibility depends on a number of philosophical issues (physicalism, functionalism, non-organicism) and empirical facts (computability, scale separation, detectability, scanning and simulation tractability) that cannot be predicted beforehand; WBE can be viewed as a program trying to test them empirically. (Sandberg 2013) Early projects are likely to merge data from multiple brains and studies, attempting to show that this can produce a sufficiently rich model to produce nontrivial behaviour but not attempting to emulate any particular individual. However, …", "title": "" }, { "docid": "139859fa0f16125f1066c55b9d3cc0d4", "text": "Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end StructureAware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. 
WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.", "title": "" }, { "docid": "c746d527ed6112760f7b047c922a0d46", "text": "New performance leaps have been achieved with multiprogramming and multi-core systems. Present parallel programming techniques and environments need significant changes in programs to accomplish parallelism, and they also involve complex, confusing and error-prone constructs and rules. Intel Cilk Plus is a C-based computing system that presents a straightforward and well-structured model for the development, verification and analysis of multicore and parallel programming. In this article, two programs are developed using Intel Cilk Plus. Two sequential sorting programs in C/C++ language are converted to multi-core programs in the Intel Cilk Plus framework to achieve parallelism and better performance. The converted program in Cilk Plus is then checked for various conditions using the tools of Cilk, and after that, the comparison of performance and speedup achieved over the single-core sequential program is discussed and reported.", "title": "" }, { "docid": "ffccfdc91a1c0b30cf98d0461149580b", "text": "This paper presents design guidelines for ultra-low power Low Noise Amplifier (LNA) design by comparing input matching, gain, and noise figure (NF) characteristics of common-source (CS) and common-gate (CG) topologies.
A current-reused ultra-low power 2.2 GHz CG LNA is proposed and implemented based on 0.18 um CMOS technology. Measurement results show 13.9 dB power gain, 5.14 dB NF, and −9.3 dBm IIP3, respectively, while dissipating 140 uA from a 1.5 V supply, which shows the best figure of merit (FOM) among all published ultra-low power LNAs.", "title": "" }, { "docid": "a8695230b065ae2e4c5308dfe4f8c10e", "text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank the top ten web search query results to bring the most personally relevant results to the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature and short- and long-term personal navigation features. The paper demonstrates that point-wise logistic regression can achieve state-of-the-art performance in terms of normalized discounted cumulative gain with the capability to scale up.", "title": "" }, { "docid": "f77d44a34563be204ef04a2ac2041901", "text": "We introduce a tree-structured attention neural network for sentences and small phrases and apply it to the problem of sentiment classification. Our model expands the current recursive models by incorporating structural information around a node of a syntactic tree using both bottom-up and top-down information propagation. Also, the model utilizes structural attention to identify the most salient representations during the construction of the syntactic tree. To our knowledge, the proposed models achieve state of the art performance on the Stanford Sentiment Treebank dataset.", "title": "" }, { "docid": "c3f2726c10ebad60d715609f15b67b43", "text": "Sleep-waking cycles are fundamental in human circadian rhythms and their disruption can have consequences for behaviour and performance.
Such disturbances occur due to domestic or occupational schedules that do not permit normal sleep quotas, rapid travel across multiple meridians and extreme athletic and recreational endeavours where sleep is restricted or totally deprived. There are methodological issues in quantifying the physiological and performance consequences of alterations in the sleep-wake cycle if the effects on circadian rhythms are to be separated from the fatigue process. Individual requirements for sleep show large variations but chronic reduction in sleep can lead to immuno-suppression. There are still unanswered questions about the sleep needs of athletes, the role of 'power naps' and the potential for exercise in improving the quality of sleep.", "title": "" }, { "docid": "10ef865d0c70369d64c900fb46a1399d", "text": "This work introduces a set of scalable algorithms to identify patterns of human daily behaviors. These patterns are extracted from multivariate temporal data that have been collected from smartphones. We have exploited sensors that are available on these devices, and have identified frequent behavioral patterns with a temporal granularity, which has been inspired by the way individuals segment time into events. These patterns are helpful to both end-users and third parties who provide services based on this information. We have demonstrated our approach on two real-world datasets and showed that our pattern identification algorithms are scalable. This scalability makes analysis on resource constrained and small devices such as smartwatches feasible. Traditional data analysis systems are usually operated in a remote system outside the device. This is largely due to the lack of scalability originating from software and hardware restrictions of mobile/wearable devices. 
By analyzing the data on the device, the user has control over the data, i.e., privacy, and the network costs will also be removed.", "title": "" }, { "docid": "38036ea0a6f79ff62027e8475859acb9", "text": "The constantly increasing demand for nutraceuticals is paralleled by a more pronounced request for natural ingredients and health-promoting foods. The multiple functional properties of cactus pear fit well with this trend. Recent data revealed the high content of some chemical constituents, which can give added value to this fruit on a nutritional and technological functionality basis. High levels of betalains, taurine, calcium, magnesium, and antioxidants are noteworthy.", "title": "" }, { "docid": "667a2ea2b8ed7d2c709f04d8cd6617c6", "text": "Knowledge centric activities of developing new products and services are becoming the primary source of sustainable competitive advantage in an era characterized by short product life cycles, dynamic markets and complex processes. We view new product development (NPD) as a knowledge-intensive activity. Based on a case study in the consumer electronics industry, we identify problems associated with knowledge management (KM) in the context of NPD by cross-functional collaborative teams. We map these problems to broad Information Technology enabled solutions and subsequently translate these into specific system characteristics and requirements. A prototype system that meets these requirements, developed to capture and manage tacit and explicit process knowledge, is further discussed. The functionalities of the system include functions for representing context with informal components, easy access to process knowledge, assumption surfacing, review of past knowledge, and management of dependencies. We demonstrate the validity of our proposed solutions using scenarios drawn from our case study. © 1999 Elsevier Science B.V.
All rights reserved.", "title": "" }, { "docid": "46dc618a779bd658bfa019117c880d3a", "text": "The concept and deployment of Internet of Things (IoT) has continued to develop momentum over recent years. Several different layered architectures for IoT have been proposed, although there is no consensus yet on a widely accepted architecture. In general, the proposed IoT architectures comprise three main components: an object layer, one or more middle layers, and an application layer. The main difference in detail is in the middle layers. Some include a cloud services layer for managing IoT things. Some propose virtual objects as digital counterparts for physical IoT objects. Sometimes both cloud services and virtual objects are included.In this paper, we take a first step toward our eventual goal of developing an authoritative family of access control models for a cloud-enabled Internet of Things. Our proposed access-control oriented architecture comprises four layers: an object layer, a virtual object layer, a cloud services layer, and an application layer. This 4-layer architecture serves as a framework to build access control models for a cloud-enabled IoT. Within this architecture, we present illustrative examples that highlight some IoT access control issues leading to a discussion of needed access control research. We identify the need for communication control within each layer and across adjacent layers (particularly in the lower layers), coupled with the need for data access control (particularly in the cloud services and application layers).", "title": "" }, { "docid": "979a3ca422e92147b25ca1b8e8ff9e5a", "text": "Open Information Extraction (Open IE) is a promising approach for unrestricted Information Discovery (ID). While Open IE is a highly scalable approach, allowing unsupervised relation extraction from open domains, it currently has some limitations. 
First, it lacks the expressiveness needed to properly represent and extract complex assertions that are abundant in text. Second, it does not consolidate the extracted propositions, which causes simple queries over Open IE assertions to return insufficient or redundant information. To address these limitations, we propose in this position paper a novel representation for ID – Propositional Knowledge Graphs (PKG). PKGs extend the Open IE paradigm by representing semantic inter-proposition relations in a traversable graph. We outline an approach for constructing PKGs from single and multiple texts, and highlight a variety of high-level applications that may leverage PKGs as their underlying information discovery and representation framework.", "title": "" }, { "docid": "9e8cf31a711a77fa5c5dcc932473dc27", "text": "The opening book is an important component of a chess engine, and thus computer chess programmers have been developing automated methods to improve the quality of their books. For chess, which has a very rich opening theory, large databases of high-quality games can be used as the basis of an opening book, from which statistics relating to move choices from given positions can be collected. In order to find out whether the opening books used by modern chess engines in machine versus machine competitions are “comparable” to those used by chess players in human versus human competitions, we carried out analysis on 26 test positions using statistics from two opening books: one compiled from humans’ games and the other from machines’ games.
Our analysis, using several nonparametric measures, shows that, overall, there is a strong association between humans’ and machines’ choices of opening moves when using a book to guide their choices.", "title": "" }, { "docid": "8c0cbfc060b3a6aa03fd8305baf06880", "text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9× to 6.6×. The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.", "title": "" }, { "docid": "a603c55eb09d858c629a71ab9285a1d1", "text": "We propose a neural network method for turning emotion into art. Our approach relies on a class-conditioned generative adversarial network trained on a dataset of modern artworks labeled with emotions. We generate this dataset through a large-scale user study of art perception with human subjects.
Preliminary results show our framework generates images which, apart from being aesthetically appealing, exhibit various features associated with the emotions they are conditioned on.", "title": "" } ]
scidocsrr
dd9275e0abc322020a02a0cccf6ceadf
Human Social Interaction Modeling Using Temporal Deep Networks
[ { "docid": "efd8a99b6fac8ca416f4eb6d825a611b", "text": "A variety of theoretical frameworks predict the resemblance of behaviors between two people engaged in communication, in the form of coordination, mimicry, or alignment. However, little is known about the time course of the behavior matching, even though there is evidence that dyads synchronize oscillatory motions (e.g., postural sway). This study examined the temporal structure of nonoscillatory actions-language, facial, and gestural behaviors-produced during a route communication task. The focus was the temporal relationship between matching behaviors in the interlocutors (e.g., facial behavior in one interlocutor vs. the same facial behavior in the other interlocutor). Cross-recurrence analysis revealed that within each category tested (language, facial, gestural), interlocutors synchronized matching behaviors, at temporal lags short enough to provide imitation of one interlocutor by the other, from one conversational turn to the next. Both social and cognitive variables predicted the degree of temporal organization. These findings suggest that the temporal structure of matching behaviors provides low-level and low-cost resources for human interaction.", "title": "" } ]
[ { "docid": "a0e5c8945212e8cde979b4c5decb71d0", "text": "Cybercrime is a pervasive threat for today's Internet-dependent society. While the real extent and economic impact is hard to quantify, scientists and officials agree that cybercrime is a huge and still growing problem. A substantial fraction of cybercrime's overall costs to society can be traced to indirect opportunity costs, resulting from unused online services. This paper presents a parsimonious model that builds on technology acceptance research and insights from criminology to identify factors that reduce Internet users' intention to use online services. We hypothesize that avoidance of online banking, online shopping and online social networking is increased by cybercrime victimization and media reports. The effects are mediated by the perceived risk of cybercrime and moderated by the user's confidence online. We test our hypotheses using a structural equation modeling analysis of a representative pan-European sample. Our empirical results confirm the negative impact of perceived risk of cybercrime on the use of all three online service categories and support the role of cybercrime experience as an antecedent of perceived risk of cybercrime. We further show that more confident Internet users perceive less cybercriminal risk and are more likely to use online banking and online shopping, which highlights the importance of consumer education.", "title": "" }, { "docid": "5c90cd6c4322c30efb90589b1a65192e", "text": "The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. 
In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model cannot produce the disjunction effect, which assumes that a decision has to be certain at one time. However, the state is allowed to be uncertain in the EM model before the final decision is made. An extra uncertainty degree parameter is defined by a belief entropy, named Deng entropy, to assign the basic probability assignment of the uncertain state, which is the key to predicting the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of the EM model. The disjunction effect can be well predicted, and fewer free parameters are needed compared with the existing models.", "title": "" }, { "docid": "260c12152d9bd38bd0fde005e0394e17", "text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. 
These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.", "title": "" }, { "docid": "4621f0bd002f8bd061dd0b224f27977c", "text": "Organisations increasingly perceive their employees as a great asset that needs to be cared for; however, at the same time, they view employees as one of the biggest potential threats to their cyber security. Employees are widely acknowledged to be responsible for security breaches in organisations, and it is important that these are given as much attention as are technical issues. A significant number of researchers have argued that non-compliance with information security policy is one of the major challenges facing organisations. This is primarily considered to be a human problem rather than a technical issue. Thus, it is not surprising that employees are one of the major underlying causes of breaches in information security. In this paper, academic literature and reports of information security institutes relating to policy compliance are reviewed. The objective is to provide an overview of the key challenges surrounding the successful implementation of information security policies. A further aim is to investigate the factors that may have an influence upon employees' behaviour in relation to information security policy. As a result, challenges to information security policy have been classified into four main groups: security policy promotion; noncompliance with security policy; security policy management and updating; and shadow security. Furthermore, the factors influencing behaviour have been divided into organisational and human factors. 
Ultimately, this paper concludes that continuously subjecting users to targeted awareness raising and dynamically monitoring their adherence to information security policy should increase the compliance level.", "title": "" }, { "docid": "7baf37974303e6f83f52ff47c441387f", "text": "We present a novel Bayesian model for semi-supervised part-of-speech tagging. Our model extends the Latent Dirichlet Allocation model and incorporates the intuition that words’ distributions over tags, p(t|w), are sparse. In addition we introduce a model for determining the set of possible tags of a word which captures important dependencies in the ambiguity classes of words. Our model outperforms the best previously proposed model for this task on a standard dataset.", "title": "" }, { "docid": "7834cad6190a019c3b0086a3f0231182", "text": "In modern train control systems, a moving train retrieves its location information through passive transponders called balises, which are placed on the sleepers of the track at regular intervals. When the train-borne antenna energizes them using tele-powering signals, balises backscatter preprogrammed telegrams, which carry information about the train's current location. Since the telegrams are static in the existing implementations, the uplink signals from the balises could be recorded by an adversary and then replayed at a different location of the track, leading to what is well-known as the replay attack. Such an attack, while the legitimate balise is still functional, introduces ambiguity to the train about its location, can impact the physical operations of the trains. For balise-to-train communication, we propose a new communication framework referred to as cryptographic random fountains (CRF), where each balise, instead of transmitting telegrams with fixed information, transmits telegrams containing random signals. 
A salient feature of CRF is the use of challenge-response based interaction between the train and the balise for communication integrity. We present a thorough security analysis of CRF to showcase its ability to mitigate sophisticated replay attacks. Finally, we also discuss the implementation aspects of our framework.", "title": "" }, { "docid": "e350e4a5baf6a9c1b701b27aba5405f4", "text": "When a detector sensitive to the target plume IR seeker is used for tracking airborne targets, the seeker tends to follow the target hot point which is a point farther away from the target exhaust and its fuselage. In order to increase the missile effectiveness, it is necessary to modify the guidance law by adding a lead bias command. The resulting guidance is known as target adaptive guidance (TAG). First, the pure proportional navigation guidance (PPNG) in 3-dimensional state is explained in a new point of view. The main idea is based on the distinction between angular rate vector and rotation vector conceptions. The current innovation is based on selection of line of sight (LOS) coordinates. A comparison between two available choices for LOS coordinates system is proposed. An improvement is made by adding two additional terms. First term includes a cross range compensator which is used to provide and enhance path observability, and obtain convergent estimates of state variables. The second term is new concept lead bias term, which has been calculated by assuming an equivalent acceleration along the target longitudinal axis. Simulation results indicate that the lead bias term properly provides terminal conditions for accurate target interception.", "title": "" }, { "docid": "da5fc78a9a1be5125fe668ac4ca20ee5", "text": "This letter proposes a groundbreaking approach in the remote-sensing community to simulating the digital surface model (DSM) from a single optical image. 
This novel technique uses conditional generative adversarial networks whose architecture is based on an encoder–decoder network with skip connections (generator) and penalizing structures at the scale of image patches (discriminator). The network is trained on scenes where both the DSM and optical data are available to establish an image-to-DSM translation rule. The trained network is then utilized to simulate elevation information on target scenes where no corresponding elevation information exists. The capability of the approach is evaluated both visually (in terms of photographic interpretation) and quantitatively (in terms of reconstruction errors and classification accuracies) on subdecimeter spatial resolution data sets captured over Vaihingen, Potsdam, and Stockholm. The results confirm the promising performance of the proposed framework.", "title": "" }, { "docid": "3baf11f31351e92c7ff56b066434ae2c", "text": "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. 
Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art performance on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.", "title": "" }, { "docid": "a1cd4a4ce70c9c8672eee5ffc085bf63", "text": "Ternary logic is a promising alternative to conventional binary logic, since it is possible to achieve simplicity and energy efficiency due to the reduced circuit overhead. In this paper, a ternary magnitude comparator design based on Carbon Nanotube Field Effect Transistors (CNFETs) is presented. This design eliminates the usage of the complex ternary decoder which is a part of existing designs. Elimination of the decoder results in reduction of delay and power. Simulations of proposed and existing designs are done on HSPICE and results prove that the proposed 1-bit comparator consumes 81% less power and shows a delay advantage of 41.6% compared to the existing design. Further, a methodology to extend the 1-bit comparator design to an n-bit comparator design is also presented.", "title": "" }, { "docid": "c91e966b803826908ae4dd82cc4a483e", "text": "Many shallow natural language understanding tasks use dependency trees to extract relations between content words. However, strict surface-structure dependency trees tend to follow the linguistic structure of sentences too closely and frequently fail to provide direct relations between content words. 
To mitigate this problem, the original Stanford Dependencies representation also defines two dependency graph representations which contain additional and augmented relations that explicitly capture otherwise implicit relations between content words. In this paper, we revisit and extend these dependency graph representations in light of the recent Universal Dependencies (UD) initiative and provide a detailed account of an enhanced and an enhanced++ English UD representation. We further present a converter from constituency to basic, i.e., strict surface structure, UD trees, and a converter from basic UD trees to enhanced and enhanced++ English UD graphs. We release both converters as part of Stanford CoreNLP and the Stanford Parser.", "title": "" }, { "docid": "48fea4f95e6b7dfa7bb371f28751ac5a", "text": "The suppression mechanism of the differential-mode noise of an X capacitor in offline power supplies is, for the first time, attributed to two distinct concepts: 1) impedance mismatch (regarding a line impedance stabilization network or mains and the equivalent power supply noise source impedance) and 2) C(dv/dt) noise current balancing (to suppress mix-mode noise). The effectiveness of X capacitors is investigated with this theory, along with experimental supports. Understanding of the two aforementioned mechanisms gives better insight into filter effectiveness, which may lead to a more compact filter design.", "title": "" }, { "docid": "8e077186aef0e7a4232eec0d8c73a5a2", "text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. 
A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally, there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.", "title": "" }, { "docid": "266d3ff38aec23ae748fa515dfd7bf60", "text": "Organizational learning (OL) and knowledge management (KM) research has gone through dramatic changes in the last twenty years and, without doubt, the field will continue to change in the next ten years. Our research suggests that Cyert and March were the first authors to reference organizational learning in their publication of 1963. It was just twenty years ago that a conference was held at Carnegie Mellon University to honor March and his contribution to the field of organizational learning. Many of these presentations were published in a special issue of Organization Science in 1991. Since that time we have seen a rapid expansion in the number of journal articles—both academic and practitioner—devoted to organizational learning. 
Fields such as information technology, marketing and human resources have also jumped on the bandwagon. Doctoral programs are including seminars on organizational learning, and MBA courses on organizational learning are appearing. All of this reflects acceptance of the concept that organizations have knowledge, do learn over time, and consider their knowledge base and social capital as valuable assets. It also reaffirms the legitimacy of research on organizational learning and its practical applications to organizations. The first edition of this Handbook was published in 2003 but most chapters were completed in 2001 or 2002. Our first edition was widely used and it was clear—given the advancement of the field—that a second edition was necessary. Some people might claim that it is foolhardy to seek to cover the full range of the literature within one volume. Our intent is to provide a resource that is useful to academics, practitioners, and students who want an overview of the current field with full recognition that—to our delight—the field continues to have major impact on research and management practices. Our response is", "title": "" }, { "docid": "018df705607ea7a71bf8a2a89b988eb7", "text": "Adult playfulness is a personality trait that enables people to frame or reframe everyday situations in such a way that they experience them as entertaining, intellectually stimulating, or personally interesting. Earlier research supports the notion that playfulness is associated with the pursuit of an active way of life. While playful children are typically described as being active, only limited knowledge exists on whether playfulness in adults is also associated with physical activity. Additionally, existing literature has not considered different facets of playfulness, but only global playfulness. Therefore, we employed a multifaceted model that allows distinguishing among Other-directed, Lighthearted, Intellectual, and Whimsical playfulness. 
For narrowing this gap in the literature, we conducted two studies addressing the associations of playfulness with health, activity, and fitness. The main aim of Study 1 was a comparison of self-ratings (N = 529) and ratings from knowledgeable others (N = 141). We tested the association of self- and peer-reported playfulness with self- and peer-reported physical activity, fitness, and health behaviors. There was a good convergence of playfulness among self- and peer-ratings (between r = 0.46 and 0.55, all p < 0.001). Data show that both self- and peer-ratings are differentially associated with physical activity, fitness, and health behaviors. For example, self-rated playfulness shared 3% of the variance with self-rated physical fitness and 14% with the pursuit of an active way of life. Study 2 provides data on the association between self-rated playfulness and objective measures of physical fitness (i.e., hand and forearm strength, lower body muscular strength and endurance, cardio-respiratory fitness, back and leg flexibility, and hand and finger dexterity) using a sample of N = 67 adults. Self-rated playfulness was associated with lower baseline and activity (climbing stairs) heart rate and faster recovery heart rate (correlation coefficients were between -0.19 and -0.24 for global playfulness). Overall, Study 2 supported the findings of Study 1 by showing positive associations of playfulness with objective indicators of physical fitness (primarily cardio-respiratory fitness). The findings represent a starting point for future studies on the relationships between playfulness, and health, activity, and physical fitness.", "title": "" }, { "docid": "8de1acc08d32f8840de8375078f2369a", "text": "Widespread acceptance of virtual reality has been partially handicapped by the inability of current systems to accommodate multiple viewpoints, thereby limiting their appeal for collaborative applications. 
We are exploring the ability to utilize passive, untracked participants in a powerwall environment. These participants see the same image as the active, immersive participant. This does present the passive user with a varying viewpoint that does not correspond to their current position. We demonstrate the impact this will have on the perceived image and show that human psychology is actually well adapted to compensating for what, on the surface, would seem to be a very drastic distortion. We present some initial guidelines for system design that minimize the negative impact of passive participation, allowing two or more collaborative participants. We then outline future experimentation to measure user compensation for these distorted viewpoints.", "title": "" }, { "docid": "8106487f98bcc94c1310799e74e7a173", "text": "We present a method to predict long-term motion of pedestrians, modeling their behavior as jump-Markov processes with their goal a hidden variable. Assuming approximately rational behavior, and incorporating environmental constraints and biases, including time-varying ones imposed by traffic lights, we model intent as a policy in a Markov decision process framework. We infer pedestrian state using a Rao-Blackwellized filter, and intent by planning according to a stochastic policy, reflecting individual preferences in aiming at the same goal.", "title": "" }, { "docid": "d9f442d281de14651ca17ec5d160b2d2", "text": "Query expansion of named entities can be employed in order to increase the retrieval effectiveness. A peculiarity of named entities compared to other vocabulary terms is that they are very dynamic in appearance, and synonym relationships between terms change with time. In this paper, we present an approach to extracting synonyms of named entities over time from the whole history of Wikipedia. In addition, we will use their temporal patterns as a feature in ranking and classifying them into two types, i.e., time-independent or time-dependent. 
Time-independent synonyms are invariant to time, while time-dependent synonyms are relevant to a particular time period, i.e., the synonym relationships change over time. Further, we describe how to make use of both types of synonyms to increase the retrieval effectiveness, i.e., query expansion with time-independent synonyms for an ordinary search, and query expansion with time-dependent synonyms for a search wrt. temporal criteria. Finally, through an evaluation based on TREC collections, we demonstrate how retrieval performance of queries consisting of named entities can be improved using our approach.", "title": "" }, { "docid": "2fa75232c6080f2c79897579b78f31d5", "text": "The rapid development of cloud computing promotes a wide deployment of data and computation outsourcing to cloud service providers by resource-limited entities. Based on a pay-per-use model, a client without enough computational power can easily outsource large-scale computational tasks to a cloud. Nonetheless, the issue of security and privacy becomes a major concern when the customer’s sensitive or confidential data is not processed in a fully trusted cloud environment. Recently, a number of publications have been proposed to investigate and design specific secure outsourcing schemes for different computational tasks. The aim of this survey is to systemize and present the cutting-edge technologies in this area. It starts by presenting security threats and requirements, followed with other factors that should be considered when constructing secure computation outsourcing schemes. In an organized way, we then dwell on the existing secure outsourcing solutions to different computational tasks such as matrix computations, mathematical optimization, and so on, treating data confidentiality as well as computation integrity. Finally, we provide a discussion of the literature and a list of open challenges in the area.", "title": "" } ]
scidocsrr
a3f195e484bb88140ae0511465acafcc
Triggering the sintering of silver nanoparticles at room temperature.
[ { "docid": "2e964b14ff4e45e3f1c339d7247a50d0", "text": "We report a method to additively build threedimensional (3-D) microelectromechanical systems (MEMS) and electrical circuitry by ink-jet printing nanoparticle metal colloids. Fabricating metallic structures from nanoparticles avoids the extreme processing conditions required for standard lithographic fabrication and molten-metal-droplet deposition. Nanoparticles typically measure 1 to 100 nm in diameter and can be sintered at plastic-compatible temperatures as low as 300 C to form material nearly indistinguishable from the bulk material. Multiple ink-jet print heads mounted to a computer-controlled 3-axis gantry deposit the 10% by weight metal colloid ink layer-by-layer onto a heated substrate to make two-dimensional (2-D) and 3-D structures. We report a high-Q resonant inductive coil, linear and rotary electrostatic-drive motors, and in-plane and vertical electrothermal actuators. The devices, printed in minutes with a 100 m feature size, were made out of silver and gold material with high conductivity,and feature as many as 400 layers, insulators, 10 : 1 vertical aspect ratios, and etch-released mechanical structure. These results suggest a route to a desktop or large-area MEMS fabrication system characterized by many layers, low cost, and data-driven fabrication for rapid turn-around time, and represent the first use of ink-jet printing to build active MEMS. [657]", "title": "" } ]
[ { "docid": "de6348bb8e3b4c1cfd1fa83557ae50c9", "text": "Cerebellar lesions can cause motor deficits and/or the cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome). We used voxel-based lesion-symptom mapping to test the hypothesis that the cerebellar motor syndrome results from anterior lobe damage whereas lesions in the posterolateral cerebellum produce the CCAS. Eighteen patients with isolated cerebellar stroke (13 males, 5 females; 20-66 years old) were evaluated using measures of ataxia and neurocognitive ability. Patients showed a wide range of motor and cognitive performance, from normal to severely impaired; individual deficits varied according to lesion location within the cerebellum. Patients with damage to cerebellar lobules III-VI had worse ataxia scores: as predicted, the cerebellar motor syndrome resulted from lesions involving the anterior cerebellum. Poorer performance on fine motor tasks was associated primarily with strokes affecting the anterior lobe extending into lobule VI, with right-handed finger tapping and peg-placement associated with damage to the right cerebellum, and left-handed finger tapping associated with left cerebellar damage. Patients with the CCAS in the absence of cerebellar motor syndrome had damage to posterior lobe regions, with lesions leading to significantly poorer scores on language (e.g. right Crus I and II extending through IX), spatial (bilateral Crus I, Crus II, and right lobule VIII), and executive function measures (lobules VII-VIII). These data reveal clinically significant functional regions underpinning movement and cognition in the cerebellum, with a broad anterior-posterior distinction. 
Motor and cognitive outcomes following cerebellar damage appear to reflect the disruption of different cerebro-cerebellar motor and cognitive loops.", "title": "" }, { "docid": "105fe384f9dfb13aef82f4ff16f87821", "text": "Dengue hemorrhagic fever (DHF), a severe manifestation of dengue viral infection that can cause severe bleeding, organ impairment, and even death, affects between 15,000 and 105,000 people each year in Thailand. While all Thai provinces experience at least one DHF case most years, the distribution of cases shifts regionally from year to year. Accurately forecasting where DHF outbreaks occur before the dengue season could help public health officials prioritize public health activities. We develop statistical models that use biologically plausible covariates, observed by April each year, to forecast the cumulative DHF incidence for the remainder of the year. We perform cross-validation during the training phase (2000-2009) to select the covariates for these models. A parsimonious model based on preseason incidence outperforms the 10-y median for 65% of province-level annual forecasts, reduces the mean absolute error by 19%, and successfully forecasts outbreaks (area under the receiver operating characteristic curve = 0.84) over the testing period (2010-2014). We find that functions of past incidence contribute most strongly to model performance, whereas the importance of environmental covariates varies regionally. This work illustrates that accurate forecasts of dengue risk are possible in a policy-relevant timeframe.", "title": "" }, { "docid": "0ebdf5dae3ce2265b9b740aba5484a7c", "text": "The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). 
Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.", "title": "" }, { "docid": "8565471c18407fc0741548d11d44a7d2", "text": "This study evaluated the clinical efficacy of 2% chlorhexidine (CHX) gel on intracanal bacteria reduction during root canal instrumentation. The additional antibacterial effect of an intracanal dressing (Ca[OH](2) mixed with 2% CHX gel) was also assessed. Forty-three patients with apical periodontitis were recruited. Four patients with irreversible pulpitis were included as negative controls. Teeth were instrumented using rotary instruments and 2% CHX gel as the disinfectant. Bacterial samples were taken upon access (S1), after instrumentation (S2), and after 2 weeks of intracanal dressing (S3). Anaerobic culture was performed. Four samples showed no bacteria growth at S1, which were excluded from further analysis. Of the samples cultured positively at S1, 10.3% (4/39) and 8.3% (4/36) sampled bacteria at S2 and S3, respectively. A significant difference in the percentage of positive culture between S1 and S2 (p < 0.001) but not between S2 and S3 (p = 0.692) was found. 
These results suggest that 2% CHX gel is an effective root canal disinfectant and additional intracanal dressing did not significantly improve the bacteria reduction on the sampled root canals.", "title": "" }, { "docid": "9c9a410422360df950a16bdddc0c71ca", "text": "We introduce a multiagent blackboard system for poetry generation with a special focus on emotional modelling. The emotional content is extracted from text, particularly blog posts, and is used as inspiration for generating poems. Our main objective is to create a system with an empathic emotional personality that would change its mood according to the affective content of the text, and express its feelings in the form of a poem. We describe here the system structure including experts with distinct roles in the process, and explain how they cooperate within the blackboard model by presenting an illustrative example of generation process. The system is evaluated considering the final outputs and the generation process. This computational creativity tool can be extended by incorporating new experts into the blackboard model, and used as an artistic enrichment of", "title": "" }, { "docid": "6bb01ba5f5d20c9c1b9e14573a825d04", "text": "Let K be a subset of the Euclidean sphere S d−1. As seen in Lecture #1, in analyzing how well a given random projection matrix S ∈ R m×d preserves vectors in K, a central object is the random variable Z(K) = sup u∈K Su 2 2 m − 1. (2.1) Suppose that our goal is to establish that, for some δ ∈ (0, 1), we have Z(K) ≤ δ with high probability. How large must the projection dimension m be, as a function of (δ, K), for this type of inequality to hold? In this lecture, we give a precise answer for Gaussian random projections.", "title": "" }, { "docid": "e1efeca0d73be6b09f5cf80437809bdb", "text": "Deep convolutional neural networks have been shown to be vulnerable to arbitrary geometric transformations. 
However, there is no systematic method to measure the invariance properties of deep networks to such transformations. We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks. In particular, our algorithm measures the robustness of deep networks to geometric transformations in a worst-case regime as they can be problematic for sensitive applications. Our extensive experimental results show that ManiFool can be used to measure the invariance of fairly complex networks on high dimensional datasets and these values can be used for analyzing the reasons for it. Furthermore, we build on ManiFool to propose a new adversarial training scheme and we show its effectiveness on improving the invariance properties of deep neural networks.", "title": "" }, { "docid": "0ca3676df82502041647e3c5612b0ff2", "text": "OBJECTIVE\nTo evaluate the effects of 6 months of pool exercise combined with a 6 session education program for patients with fibromyalgia syndrome (FM).\n\n\nMETHODS\nThe study population comprised 58 patients, randomized to a treatment or a control group. Patients were instructed to match the pool exercises to their threshold of pain and fatigue. The education focused on strategies for coping with symptoms and encouragement of physical activity. The primary outcome measurements were the total score of the Fibromyalgia Impact Questionnaire (FIQ) and the 6 min walk test, recorded at study start and after 6 mo. Several other tests and instruments assessing functional limitations, severity of symptoms, disabilities, and quality of life were also applied.\n\n\nRESULTS\nSignificant differences between the treatment group and the control group were found for the FIQ total score (p = 0.017) and the 6 min walk test (p < 0.0001). 
Significant differences were also found for physical function, grip strength, pain severity, social functioning, psychological distress, and quality of life.\n\n\nCONCLUSION\nThe results suggest that a 6 month program of exercises in a temperate pool combined with education will improve the consequences of FM.", "title": "" }, { "docid": "cef6d9eb15f00eedcb7241d62e5a1b02", "text": "There has been a rapid increase in the use of social networking websites in the last few years. People most conveniently express their views and opinions on a wide array of topics via such websites. Sentiment analysis of such data which comprises of people's views is very important in order to gauge public opinion on a particular topic of interest. This paper reviews a number of techniques, both lexicon-based approaches as well as learning based methods that can be used for sentiment analysis of text. In order to adapt these techniques for sentiment analysis of data procured from one of the social networking websites, Twitter, a number of issues and challenges need to be addressed, which are put forward in this paper.", "title": "" }, { "docid": "99d99ce673dfc4a6f5bf3e7d808a5570", "text": "We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. 
The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.", "title": "" }, { "docid": "fa3641ad1afc65ca0a96c68aaf87c261", "text": "Recent work has explored methods for learning continuous vector space word representations reflecting the underlying semantics of words. Simple vector space arithmetic using cosine distances has been shown to capture certain types of analogies, such as reasoning about plurals from singulars, past tense from present tense, etc. In this paper, we introduce a new approach to capture analogies in continuous word representations, based on modeling not just individual word vectors, but rather the subspaces spanned by groups of words. We exploit the property that the set of subspaces in n-dimensional Euclidean space forms a curved manifold space called the Grassmannian, a quotient subgroup of the Lie group of rotations in n dimensions. Based on this mathematical model, we develop a modified cosine distance model based on geodesic kernels that captures relation-specific distances across word categories. Our experiments on analogy tasks show that our approach performs significantly better than the previous approaches for the given task.", "title": "" }, { "docid": "21a917abee792625539e7eabb3a81f4c", "text": "This paper investigates the power operation in information system development (ISD) processes. Due to the fact that key actors in different departments possess different professional knowledge, their different contexts lead to some employees supporting IS, while others resist it to achieve their goals. We aim to interpret these power operations in ISD from the theory of technological frames. This study is based on qualitative data collected from KaoKang (pseudonym), a port authority in Taiwan.
We attempt to understand the situations of different key actors (e.g. top manager, MIS professionals, employees of DP-1 division, consultants of KaoKang, and customers (outside users)) who wield power in ISD in different situations. In this respect, we interpret the data using a technological frame. Finally, we aim to gain fresh insight into power operation in ISD from this perspective.", "title": "" }, { "docid": "acdcdae606f9c046aab912075d4ec609", "text": "Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SEQTGREEDY, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. 
Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.", "title": "" }, { "docid": "4620525bfbfd492f469e948b290d73a2", "text": "This thesis contains the complete end-to-end simulation, development, implementation, and calibration of the wide bandwidth, low-Q, Kiwi-SAS synthetic aperture sonar (SAS). Through the use of a very stable towfish, a novel wide bandwidth transducer design, and autofocus procedures, high-resolution diffraction limited imagery is produced. As a complete system calibration was performed, this diffraction limited imagery is not only geometrically calibrated, it is also calibrated for target cross-section or target strength estimation. It is important to note that the diffraction limited images are formed without access to any form of inertial measurement information. Previous investigations applying the synthetic aperture technique to sonar have developed processors based on exact, but inefficient, spatial-temporal domain time-delay and sum beamforming algorithms, or they have performed equivalent operations in the frequency domain using fast-correlation techniques (via the fast Fourier transform (FFT)). In this thesis, the algorithms used in the generation of synthetic aperture radar (SAR) images are derived in their wide bandwidth forms and it is shown that these more efficient algorithms can be used to form diffraction limited SAS images.
Several new algorithms are developed; the accelerated chirp scaling algorithm represents an efficient method for processing synthetic aperture data, while modified phase gradient autofocus and a low-Q autofocus routine based on prominent point processing are used to focus both simulated and real target data that has been corrupted by known and unknown motion or medium propagation errors.", "title": "" }, { "docid": "d5a343b290765b934b0dfdf553383bfa", "text": "The advent of RGB-D cameras which provide synchronized range and video data creates new opportunities for exploiting both sensing modalities for various robotic applications. This paper exploits the strengths of vision and range measurements and develops a novel robust algorithm for localization using RGB-D cameras. We show how correspondences established by matching visual SIFT features can effectively initialize the generalized ICP algorithm as well as demonstrate situations where such initialization is not viable. We propose an adaptive architecture which computes the pose estimate from the most reliable measurements in a given environment and present thorough evaluation of the resulting algorithm against a dataset of RGB-D benchmarks, demonstrating superior or comparable performance in the absence of the global optimization stage. Lastly we demonstrate the proposed algorithm on a challenging indoor dataset and demonstrate improvements where pose estimation from either pure range sensing or vision techniques perform poorly.", "title": "" }, { "docid": "260f7258c3739efec1910028ec429471", "text": "Cryptography is considered to be a discipline of science of achieving security by converting sensitive information to an un-interpretable form such that it cannot be interpreted by anyone except the transmitter and intended recipient. An innumerable set of cryptographic schemes persists, each of which has its own strengths and weaknesses.
In this paper we have developed a traditional or character oriented Polyalphabetic cipher by using a simple algebraic equation. In this, we made use of an iteration process and introduced a key K0 obtained by permuting the elements of a given key seed value. This key strengthens the cipher and it does not allow the cipher to be broken by the known plain text attack. The cryptanalysis performed clearly indicates that the cipher is a strong one.", "title": "" }, { "docid": "8ac596c8360e2d56b24fee750d58a8b8", "text": "Stemming is a process of reducing inflected words to their stem or root from a generally written word form. This process is used in many text mining applications as a feature selection technique. Moreover, Arabic text summarization has increasingly become an important task in natural language processing area (NLP). Therefore, the aim of this paper is to evaluate the impact of three different Arabic stemmers (i.e. Khoja, Larekey and Alkhalil's stemmer) on the text summarization performance for Arabic language. The evaluation of the proposed system, with the three different stemmers and without stemming, on the dataset used shows that the best performance was achieved by Khoja stemmer in terms of recall, precision and F1-measure. The evaluation also shows that the performances of the proposed system are significantly improved by applying the stemming process in the pre-processing stage.", "title": "" }, { "docid": "7e6a3a04c24a0fc24012619d60ebb87b", "text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept.
This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.", "title": "" }, { "docid": "5546cbb6fac77d2d9fffab8ba0a50ed8", "text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for the energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies.", "title": "" }, { "docid": "11ed7e0742ddb579efe6e1da258b0d3c", "text": "Supervisory Control and Data Acquisition (SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attacks on SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlight commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology (IT) systems.", "title": "" } ]
scidocsrr
8972762f87b614f4c4037d92dc1861e6
Sustainable Supply Chain Management Capability Maturity: Framework Development and Initial Evaluation
[ { "docid": "21e47bd70185299e94f8553ca7e60a6e", "text": "Processes causing greenhouse gas (GHG) emissions benefit humans by providing consumer goods and services. This benefit, and hence the responsibility for emissions, varies by purpose or consumption category and is unevenly distributed across and within countries. We quantify greenhouse gas emissions associated with the final consumption of goods and services for 73 nations and 14 aggregate world regions. We analyze the contribution of 8 categories: construction, shelter, food, clothing, mobility, manufactured products, services, and trade. National average per capita footprints vary from 1 tCO2e/y in African countries to approximately 30/y in Luxembourg and the United States. The expenditure elasticity is 0.57. The cross-national expenditure elasticity for just CO2, 0.81, corresponds remarkably well to the cross-sectional elasticities found within nations, suggesting a global relationship between expenditure and emissions that holds across several orders of magnitude difference. On the global level, 72% of greenhouse gas emissions are related to household consumption, 10% to government consumption, and 18% to investments. Food accounts for 20% of GHG emissions, operation and maintenance of residences is 19%, and mobility is 17%. Food and services are more important in developing countries, while mobility and manufactured goods rise fast with income and dominate in rich countries. The importance of public services and manufactured goods has not yet been sufficiently appreciated in policy. Policy priorities hence depend on development status and country-level characteristics.", "title": "" } ]
[ { "docid": "abaf590dfff79cd3282b36db369c8a32", "text": "Classifying a visual concept merely from its associated online textual source, such as a Wikipedia article, is an attractive research topic in zero-shot learning because it alleviates the burden of manually collecting semantic attributes. Recent work has pursued this approach by exploring various ways of connecting the visual and text domains. In this paper, we revisit this idea by going further to consider one important factor: the textual representation is usually too noisy for the zero-shot learning application. This observation motivates us to design a simple yet effective zero-shot learning method that is capable of suppressing noise in the text. Specifically, we propose an l2,1-norm based objective function which can simultaneously suppress the noisy signal in the text and learn a function to match the text document and visual features. We also develop an optimization algorithm to efficiently solve the resulting problem. By conducting experiments on two large datasets, we demonstrate that the proposed method significantly outperforms those competing methods which rely on online information sources but with no explicit noise suppression. Furthermore, we make an in-depth analysis of the proposed method and provide insight as to what kind of information in documents is useful for zero-shot learning.", "title": "" }, { "docid": "13c7278393988ec2cfa9a396255e6ff3", "text": "Finding good transfer functions for rendering medical volumes is difficult, non-intuitive, and time-consuming. We introduce a clustering-based framework for the automatic generation of transfer functions for volumetric data. The system first applies mean shift clustering to oversegment the volume boundaries according to their low-high (LH) values and their spatial coordinates, and then uses hierarchical clustering to group similar voxels. 
A transfer function is then automatically generated for each cluster such that the number of occlusions is reduced. The framework also allows for semi-automatic operation, where the user can vary the hierarchical clustering results or the transfer functions generated. The system improves the efficiency and effectiveness of visualizing medical images and is suitable for medical imaging applications.", "title": "" }, { "docid": "d763198d3bfb1d30b153e13245c90c08", "text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.", "title": "" }, { "docid": "f474fd0bce5fa65e79ceb77a17ace260", "text": "One popular approach to controlling humanoid robots is through inverse kinematics (IK) with stiff joint position tracking. 
On the other hand, inverse dynamics (ID) based approaches have gained increasing acceptance by providing compliant motions and robustness to external perturbations. However, the performance of such methods is heavily dependent on high quality dynamic models, which are often very difficult to produce for a physical robot. IK approaches only require kinematic models, which are much easier to generate in practice. In this paper, we supplement our previous work with ID-based controllers by adding IK, which helps compensate for modeling errors. The proposed full body controller is applied to three tasks in the DARPA Robotics Challenge (DRC) Trials in Dec. 2013.", "title": "" }, { "docid": "ef77d042a04b7fa704f13a0fa5e73688", "text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. In 1949, Hebb proposed his \"learning rule,\" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal.
Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocampus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.", "title": "" }, { "docid": "92341e8785da518ae05599c85a6de212", "text": "A novel dual-layer electrically small radio-frequency-identification (RFID) tag antenna is proposed for metallic object applications. With a proximity-coupled feed method, two rotationally symmetric loaded via-patches are fed through an embedded dual-element planar inverted-F antenna (PIFA) array. With this configuration, the antenna volume is reduced to 0.08 λ×0.04 λ×0.007 λ while the measured antenna gain is 0.08 dBi at the frequency of 923 MHz. Studies demonstrate that the proposed antenna is a good candidate for ultra-high-frequency RFID tags to be mounted on metallic surfaces, especially in size-constrained scenarios. Meanwhile, a figure of merit, namely, NBG, is presented, with which a comparison among electrically small tag antennas is carried out. Finally, several guidelines are given out to facilitate the miniaturization of RFID tag antennas for metallic object applications.", "title": "" }, { "docid": "bd10968ad7163e562922d94b6e474253", "text": "We propose a novel method for Acoustic Event Detection (AED). In contrast to speech, sounds coming from acoustic events may be produced by a wide variety of sources.
Furthermore, distinguishing them often requires analyzing an extended time period due to the lack of a clear sub-word unit. In order to incorporate the long-time frequency structure for AED, we introduce a convolutional neural network (CNN) with a large input field. In contrast to previous works, this enables to train audio event detection end-to-end. Our architecture is inspired by the success of VGGNet [1] and uses small, 3×3 convolutions, but more depth than previous methods in AED. In order to prevent over-fitting and to take full advantage of the modeling capabilities of our network, we further propose a novel data augmentation method to introduce data variation. Experimental results show that our CNN significantly outperforms state of the art methods including Bag of Audio Words (BoAW) and classical CNNs, achieving a 16% absolute improvement.", "title": "" }, { "docid": "14b36f57ccc2d4814e8855fd7e3b102c", "text": "The functions of Klotho (KL) are multifaceted and include the regulation of aging and mineral metabolism. It was originally identified as the gene responsible for premature aging-like symptoms in mice and was subsequently shown to function as a coreceptor in the fibroblast growth factor (FGF) 23 signaling pathway. The discovery of KL as a partner for FGF23 led to significant advances in understanding of the molecular mechanisms underlying phosphate and vitamin D metabolism, and simultaneously clarified the pathogenic roles of the FGF23 signaling pathway in human diseases. These novel insights led to the development of new strategies to combat disorders associated with the dysregulated metabolism of phosphate and vitamin D, and clinical trials on the blockade of FGF23 signaling in X-linked hypophosphatemic rickets are ongoing. 
Molecular and functional insights on KL and FGF23 have been discussed in this review and were extended to how dysregulation of the FGF23/KL axis causes human disorders associated with abnormal mineral metabolism.", "title": "" }, { "docid": "a913255762a5ced0fe00d08c599333d9", "text": "The electroencephalogram (EEG) consists of an underlying background process with superimposed transient nonstationarities such as epileptic spikes (ESs). The detection of ESs in the EEG is of particular importance in the diagnosis of epilepsy. In this paper a new approach for detecting ESs in EEG recordings is presented. It is based on a time-varying autoregressive model (TVAR) that makes use of the nonstationarities of the EEG signal. The autoregressive (AR) parameters are estimated via Kalman filtering (KF). In our method, the EEG signal is first preprocessed to accentuate ESs and attenuate background activity, and then passed through a thresholding function to determine ES locations. The proposed method is evaluated using simulated signals as well as real inter-ictal EEGs", "title": "" }, { "docid": "2855a1f420ed782317c1598c9d9c185e", "text": "Ranking authors is vital for identifying a researcher’s impact and his standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The Author-Conference-Topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. 
Principal Component Analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.", "title": "" }, { "docid": "aa625f9e46914cb288fec3fd00fdcfda", "text": "Battery modelling is a significant component of advanced Battery Management Systems (BMSs). The full electrochemical model of a battery can represent high precision battery behavior during its operation. However, the high computational requirement to solve the coupled nonlinear partial differential equations (PDEs) that define these models limits their applicability in an online BMS, especially for a battery pack containing hundreds of cells. Therefore, a reduced SPM-Three parameter model is proposed in this paper to efficiently model a lithium ion cell with high accuracy in a specific range of cell operation. The reduced model is implemented in a Simulink block for developing an advanced battery modelling tool that can be applied to a wide variety of battery applications.", "title": "" }, { "docid": "cd54295ead776808dcdb04d13670620c", "text": "This study aimed to ascertain the reliability of the McCabe score in a healthcare-associated infection point prevalence survey.   A 10 European Union Member States survey in 20 hospitals (n = 1912) indicated that there was a moderate level of agreement (κ = 0.57) with the score. The reliability of the application of the score could be increased by training data collectors, particularly with reference to the ultimately fatal criteria. 
This is important if the score is to be used to risk adjust data to drive infection prevention and control interventions.", "title": "" }, { "docid": "a178871cd82edaa05a0b0befacb7fc38", "text": "The main applications and challenges of one of the hottest research areas in computer science.", "title": "" }, { "docid": "981e88bd1f4187972f8a3d04960dd2dd", "text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) in children's dramatic activity. A system that employs a mobile robot mounted with a projector-camera is used to help manage children's dramatic activity by projecting backdrops and creating synthetic video imagery where, e.g., children's faces are replaced with graphic characters. In this Delphi-based study, a panel of 33 professionals includes 11 children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view excerpts from the video taken from the actual usage situation. In the first stage of the survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open-ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions are categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigates how AR or robots can be of use in children's dramatic activity in other ways (than as originally given) and to other educational domains.
The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children's keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children's development, while leveraging on strengths of the technologies used.", "title": "" }, { "docid": "7323cf16224197b312d1a4c7ff4168ea", "text": "It is well known that animals can use neural and sensory feedback via vision, tactile sensing, and echolocation to negotiate obstacles. Similarly, most robots use deliberate or reactive planning to avoid obstacles, which relies on prior knowledge or high-fidelity sensing of the environment. However, during dynamic locomotion in complex, novel, 3D terrains, such as a forest floor and building rubble, sensing and planning suffer bandwidth limitation and large noise and are sometimes even impossible. Here, we study rapid locomotion over a large gap-a simple, ubiquitous obstacle-to begin to discover the general principles of the dynamic traversal of large 3D obstacles. We challenged the discoid cockroach and an open-loop six-legged robot to traverse a large gap of varying length. Both the animal and the robot could dynamically traverse a gap as large as one body length by bridging the gap with its head, but traversal probability decreased with gap length. Based on these observations, we developed a template that accurately captured body dynamics and quantitatively predicted traversal performance. Our template revealed that a high approach speed, initial body pitch, and initial body pitch angular velocity facilitated dynamic traversal, and successfully predicted a new strategy for using body pitch control that increased the robot's maximal traversal gap length by 50%.
Our study established the first template of dynamic locomotion beyond planar surfaces, and is an important step in expanding terradynamics into complex 3D terrains.", "title": "" }, { "docid": "c66fc0dbd8774fdb5fea3990985e65d7", "text": "Since 1985 various evolutionary approaches to multiobjective optimization have been developed, capable of searching for multiple solutions concurrently in a single run. But the few comparative studies of different methods available to date are mostly qualitative and restricted to two approaches. In this paper an extensive, quantitative comparison is presented, applying four multiobjective evolutionary algorithms to an extended 0/1 knapsack problem. 1 Introduction Many real-world problems involve simultaneous optimization of several incommensurable and often competing objectives. Usually, there is no single optimal solution, but rather a set of alternative solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered. They are known as Pareto-optimal solutions. Mathematically, the concept of Pareto-optimality can be defined as follows: Let us consider, without loss of generality, a multiobjective maximization problem with m parameters (decision variables) and n objectives: Maximize y = f(x) = (f_1(x), f_2(x), ..., f_n(x)) (1) where x = (x_1, x_2, ..., x_m) ∈ X and y = (y_1, y_2, ..., y_n) ∈ Y are tuples. A decision vector a ∈ X is said to dominate a decision vector b ∈ X (also written as a ≻ b) iff ∀i ∈ {1, 2, ..., n}: f_i(a) ≥ f_i(b) ∧ ∃j ∈ {1, 2, ..., n}: f_j(a) > f_j(b) (2) Additionally, in this study we say a covers b iff a ≻ b or a = b. All decision vectors which are not dominated by any other decision vector are called nondominated or Pareto-optimal. 
Often, there is a special interest in finding or approximating the Pareto-optimal set, mainly to gain deeper insight into the problem and knowledge about alternate solutions, respectively. Evolutionary algorithms (EAs) seem to be especially suited for this task, because they process a set of solutions in parallel, eventually exploiting similarities of solutions by crossover. Some researchers suggest that multiobjective search and optimization might be a problem area where EAs do better than other blind search strategies [1][12]. Since the mid-eighties various multiobjective EAs have been developed, capable of searching for multiple Pareto-optimal solutions concurrently in a single run.", "title": "" }, { "docid": "ec1317bfb5fd80ec79fcec3163213167", "text": "The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events. 
Non-contrast-enhanced cardiac CT is considered a reference for quantification of CAC. Recently, it has been shown that CAC may be quantified in cardiac CT angiography (CCTA). We present a pattern recognition method that automatically identifies and quantifies CAC in CCTA. The study included CCTA scans of 50 patients equally distributed over five cardiovascular risk categories. CAC in CCTA was identified in two stages. In the first stage, potential CAC voxels were identified using a convolutional neural network (CNN). In the second stage, candidate CAC lesions were extracted based on the CNN output for analyzed voxels and thereafter described with a set of features and classified using a Random Forest. Ten-fold stratified cross-validation experiments were performed. CAC volume was quantified per patient and compared with manual reference annotations in the CCTA scan. Bland-Altman bias and limits of agreement between reference and automatic annotations were -15 (-198–168) after the first stage and -3 (-86–79) after the second stage. The results show that CAC can be automatically identified and quantified in CCTA using the proposed method. This might obviate the need for a dedicated non-contrast-enhanced CT scan for CAC scoring, which is regularly acquired prior to a CCTA scan, and thus reduce the CT radiation dose received by patients.", "title": "" }, { "docid": "28fbb71fab5ea16ef52611b31fcf1dfa", "text": "Gamification, an emerging idea for using game design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, few research and design guidelines exist regarding gamified information systems. 
We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and, based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users; that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and, at the same time, advance existing theories.", "title": "" }, { "docid": "43c91f3491daceb76906a48fba3663dc", "text": "A noninverting buck-boost dc-dc converter can work in buck, boost, or buck-boost mode. Hence, it provides a good solution when the input voltage may be higher or lower than the output voltage. However, a buck-boost converter requires four power transistors, rather than two. Therefore, its efficiency decreases, due to the conduction and switching losses of the two extra power transistors. Another issue of a buck-boost converter is how to smoothly switch its operational mode when its input voltage approaches its output voltage. A hysteretic-current-mode noninverting buck-boost converter with high efficiency and smooth mode transition is proposed, and it was designed and fabricated using the TSMC 0.35-μm CMOS 2P4M 3.3 V/5 V mixed-signal polycide process. 
The input voltage may range from 2.5 to 5 V, the output voltage is 3.3 V, and the maximal load current is 400 mA. According to the measured results, the maximal efficiency reaches 98.1%, and the efficiencies measured in the entire input voltage and loading ranges are all above 80%.", "title": "" } ]
scidocsrr
60a84068c2e713a16f3299ca5a274d5b
Analytical Modeling of Partially Shaded Photovoltaic Systems
[ { "docid": "ad6763de671234eb48b3629c25ab9113", "text": "Photovoltaic (PV) system performance is influenced by several factors, including irradiance, temperature, shading, degradation, mismatch losses, soiling, etc. Shading of a PV array, in particular, either complete or partial, can have a significant impact on its power output and energy yield, depending on array configuration, shading pattern, and the bypass diodes incorporated in the PV modules. In this paper, the effect of partial shading on multicrystalline silicon (mc-Si) PV modules is investigated. A PV module simulation model implemented in P-Spice is first employed to quantify the effect of partial shading on the I-V curve and the maximum power point (MPP) voltage and power. Then, generalized formulae are derived, which permit sufficiently accurate evaluation of the MPP voltage and power of mc-Si PV modules, without the need to resort to detailed modeling and simulation. The equations derived are validated via experimental results.", "title": "" }, { "docid": "320bde052bb8d325c90df45cb21ac5de", "text": "The power generated by a solar photovoltaic (PV) module depends on the surrounding irradiance, temperature and shading conditions. Under partial shading conditions (PSC), the power from the PV module can be dramatically reduced and maximum power point tracking (MPPT) control will be affected. This paper presents a hybrid simulation model of a PV cell/module and system using Matlab®/Simulink® and Pspice®. The hybrid simulation model includes the solar PV cells and the converter power stage and can be expanded to add MPPT control and other functions. The model is able to simulate both the I-V characteristic curves and the P-V characteristic curves of PV modules under uniform shading conditions (USC) and PSC. The model is used to study the effects of different parameter variations on the PV array. 
The developed model is suitable for simulating several homogeneous and/or heterogeneous PV cells or PV panels connected in series and/or in parallel.", "title": "" }, { "docid": "5aedc933eaeef54893626359b89861fc", "text": "A photovoltaic cell converts solar energy into electrical energy by the photovoltaic effect. Solar cells are widely used in terrestrial and space applications. The photovoltaic cells must be operated at their maximum power point. The maximum power point varies with illumination, temperature, radiation dose and other ageing effects. In this paper, mathematical modeling and the V-I and P-V characteristics of PV cells are studied. Different modeling techniques, such as an empirical model and an ANFIS model, are proposed and developed. The results obtained by the empirical models will be compared with the ANFIS model, and it will be proved that ANFIS gives better results. The simulated V-I and P-V characteristics of a photovoltaic cell for various temperatures and irradiances are presented. Also, ANFIS model outputs are presented. This can be used for the sizing of a PV system.", "title": "" } ]
[ { "docid": "8b3f597acb5a5a1333176a13e7dbbe43", "text": "Generalization bounds for time series prediction and other non-i.i.d. learning scenarios that can be found in the machine learning and statistics literature assume that observations come from a (strictly) stationary distribution. The first bounds for the completely non-stationary setting were proved in [6]. In this work we present an extension of these results and derive novel algorithms for forecasting non-stationary time series. Our experimental results show that our algorithms significantly outperform standard autoregressive models commonly used in practice.", "title": "" }, { "docid": "36a538b833de4415d12cd3aa5103cf9b", "text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. In this context, we developed a platform, called P-ETL (Parallel-ETL), for extracting (E), transforming (T) and loading (L) very large data into a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in a parallel way following the MapReduce paradigm. The conducted experiment mainly shows that increasing the number of tasks dealing with large data speeds up the ETL process.", "title": "" }, { "docid": "b23cac0702e7f992dcf0362240f65670", "text": "A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks--their organization as functional, sparsely connected subunits--but there is no consensus regarding why modularity itself evolved. 
Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes.", "title": "" }, { "docid": "1891bf842d446a7d323dc207b38ff5a9", "text": "We use linear programming techniques to obtain new upper bounds on the maximal squared minimum distance of spherical codes with fixed cardinality. Functions Qj(n, s) are introduced with the property that Qj(n, s) < 0 for some j > m iff the Levenshtein bound Lm(n, s) on A(n, s) = max{|W| : W is an (n, |W|, s) code} can be improved by a polynomial of degree at least m + 1. General conditions on the existence of new bounds are presented. We prove that for fixed dimension n ≥ 5 there exists a constant k = k(n) such that all Levenshtein bounds Lm(n, s) for m ≥ 2k − 1 can be improved. An algorithm for obtaining new bounds is proposed and discussed.", "title": "" }, { "docid": "420ce4b16c38b220afaa9ffc013c311c", "text": "The human face constantly conveys information, both consciously and subconsciously. However, as basic as it is for humans to visually interpret this information, it is quite a big challenge for machines. Conventional semantic facial feature recognition and analysis techniques mostly lack robustness and suffer from high computation time. 
This paper aims to explore ways for machines to learn to interpret semantic information available in faces in an automated manner without requiring manual design of feature detectors, using the approach of Deep Learning. In this study, the effects of various factors and hyper-parameters of deep neural networks are investigated for an optimal network configuration that can accurately recognize semantic facial features like emotions, age, gender, ethnicity, etc. Furthermore, the effect of high-level concepts on low-level features is explored through the analysis of the similarities in low-level descriptors of different semantic features. This paper also demonstrates a novel idea of using a deep network to generate 3-D Active Appearance Models of faces from real-world 2-D images. For a more detailed report on this work, please see [1].", "title": "" }, { "docid": "1062bec2a56c8f8cf11d599144774630", "text": "The paper proposes a soft computing approach to solve the document clustering problem. Document clustering is a specialized clustering problem in which textual documents are autonomously segregated into a number of identifiable, subject-homogeneous and smaller sub-collections (also called clusters). Identifying implicit textual patterns within the documents is a challenging aspect, as there can be thousands of such textual features. Partitional clustering algorithms like k-means are mainly used for this problem. There are several drawbacks in the k-means algorithm, such as (i) dependency on the initial seeds and (ii) a tendency to become trapped in local optima, although every k-means solution may contain some good partial arrangements for clustering. Meta-heuristic algorithms like Black Hole (BH) use a certain trade-off between randomization and local search to find optimal and near-optimal solutions. Our motivation comes from the fact that meta-heuristic optimization can quickly produce a global optimal solution starting from a random k-means initial solution. 
The contributions from this research are (i) an implementation of the black hole algorithm with k-means as the embedding, and (ii) the use of the global-search and local-search optimization phenomena for parameter adjustment. A series of experiments is performed with our proposed method on standard text mining datasets: (i) NEWS20, (ii) Reuters and (iii) WebKB, and the results are evaluated on Purity and the Silhouette Index. In comparison, the proposed method outperforms basic k-means and GA with k-means embedding, and quickly converges to a global or near-global optimal solution.", "title": "" }, { "docid": "641754ee9332e1032838d0dba7712607", "text": "Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, new medical technology and numerous administration policies and procedures. Adverse events initiated by medication error are a crucial area to improve patient safety. This project looked at the complexity of the medication administration process at a regional hospital and the effect of two medication distribution systems. A reduction in work complexity and time spent gathering medication and supplies was a goal of this work; but more important was determining what barriers to safety and efficiency exist in the medication administration process and the impact of barcode scanning and other technologies. The concept of mobile medication units is attractive to both managers and clinicians; however, it is only one solution to the problems with medication administration. Introduction and Background Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, and the numerous policies and procedures created for their administration. 
Mayo and Duncan (2004) found that a “single [hospital] patient can receive up to 18 medications per day, and a nurse can administer as many as 50 medications per shift” (p. 209). While some researchers indicated that the solution is more nurse education or training (e.g. see Mayo & Duncan, 2004; and Tang, Sheu, Yu, Wei, & Chen, 2007), it does not appear that they have determined the feasibility of this solution and the increased time necessary to look up every unfamiliar medication. Most of the research which focuses on the causes of medication errors does not examine the processes involved in the administration of the medication. And yet, understanding the complexity in the nurses’ processes and workflow is necessary to develop safeguards and create more robust systems that reduce the probability of errors and adverse events. Current medication administration processes include many tasks, including but not limited to, assessing the patient to obtain pertinent data, gathering medications, confirming the five rights (right dose, patient, route, medication, and time), administering the medications, documenting administration, and observing for therapeutic and untoward effects. In studies of the delivery of nursing care in acute care settings, Potter et al. (2005) found that nurses spent 16% of their time preparing or administering medication. In addition to the amount of time that the nurses spent in preparing and administering medication, Potter et al. found that a significant number of interruptions occurred during this critical process. Interruptions impact the cognitive workload of the nurse, and create an environment where medication errors are more likely to occur. A second environmental factor that affects the nurses’ workflow is the distance traveled to administer care during a shift. 
Welker, Decker, Adam, & Zone-Smith (2006) found that, on average, ward nurses who were assigned three patients walked just over 4.1 miles per shift, while a nurse assigned to six patients walked over 4.8 miles. As a large number of interruptions (22%) occurred within the medication rooms, which were highly visible and in high-traffic locations (Potter et al., 2005), and while collecting supplies or traveling to and from patient rooms (Ebright, Patterson, Chalko, & Render, 2003), reducing the distances and frequency of repeated travel could decrease the number of interruptions and possibly errors in medication administration. Adding new technology, revising policies and procedures, and providing more education have often been the approaches taken to reduce medication errors. Unfortunately, these new technologies, such as computerized order entry and electronic medical records/charting, and new procedures, for instance bar-code scanning of both the medicine and the patient, can add complexity to the nurse’s task load. The added complexity, together with the additional time necessary to complete the extra steps, can lead to workarounds and variations in care. Given the problems in the current medication administration processes, this work focused on facilitating the nurse’s role in the medication administration process. This study expands on the Braswell and Duggar (2006) investigation and compares processes at baseline and post-introduction of a new mobile medication system. To do this, the current medication administration and distribution process was fully documented to determine a baseline in workload complexity. Then a new mobile medication center was installed to allow nurses easier access to patient medications while traveling on the floor, and the medication administration and distribution process was remapped to demonstrate where process complexities were reduced and nurse workflow was made more efficient. 
A similar study showed that the time nurses spend gathering medications and supplies can be dramatically reduced through this type of system (see Braswell & Duggar, 2006); however, they did not directly investigate the impact on the nursing process. Thus, this research is presented to document the impact of this technology on the nursing workflow at a regional hospital, and as an expansion on the work begun by Braswell and Duggar.", "title": "" }, { "docid": "ad0688b0c80cf6eeed13a2a9b112f97c", "text": "P2P lending is an emerging Internet-based application where individuals can directly borrow money from each other. The past decade has witnessed the rapid development and prevalence of online P2P lending platforms, examples of which include Prosper, LendingClub, and Kiva. Meanwhile, extensive research has been done that mainly focuses on the studies of platform mechanisms and transaction data. In this article, we provide a comprehensive survey on the research about P2P lending, which, to the best of our knowledge, is the first focused effort in this field. Specifically, we first provide a systematic taxonomy for P2P lending by summarizing different types of mainstream platforms and comparing their working mechanisms in detail. Then, we review and organize the recent advances on P2P lending from various perspectives (e.g., an economics and sociology perspective, and a data-driven perspective). Finally, we propose our opinions on the prospects of P2P lending and suggest some future research directions in this field. Meanwhile, throughout this paper, some analyses of real-world data collected from Prosper and Kiva are also conducted.", "title": "" }, { "docid": "b1b13e9695d59ef7d1f2b4db7afd1be6", "text": "PCR amplification of tetrameric short tandem repeats (STRs) can lead to Taq enzyme slippage and artefact products typically one repeat unit less in size than the parent STR. 
These back stutter or n-4 amplification products are low-level relative to the amplification of the parent STR but are widely seen in the forensic community, where tetrameric STRs are employed in the generation of DNA profiles. To aid the interpretation of DNA mixtures where minor contributor(s) might be present in amounts comparable to the back stutter products, the typical amounts of back stutter generated have been well characterised and guidelines for interpretation are in place. However, further artefacts thought to be due to Taq enzyme slippage, leading to products with one repeat unit greater than the parent sequence (n+4 or forward stutter) or two repeats less (n-8 or double back stutter), also occur, but these have not been well characterised despite their potential influence on mixture interpretations. Here we present findings with respect to these additional artefacts from a study of 10,000 alleles and include guidelines for interpretation.", "title": "" }, { "docid": "c1fbb1df350466239b26daf28a00f292", "text": "In this paper we show how the open standard modeling language Modelica can be effectively used to support model-based design and verification of cyber-physical systems stemming from complex power electronics systems. To this end we present a Modelica model for a Distributed Maximum Power Point Tracking system along with model validation results.", "title": "" }, { "docid": "a09248f7c017c532a3a0a580be14ba20", "text": "In the past ten years, the software aging phenomenon has been systematically researched, and recognized by both the academic and industry communities as an important obstacle to achieving dependable software systems. One of its main effects is the depletion of operating system resources, causing system performance degradation or crash/hang failures in running applications. 
When conducting experimental studies to evaluate the operational reliability of systems suffering from software aging, long periods of runtime are required to observe system failures. Focusing on this problem, we present a systematic approach to accelerate the software aging manifestation to reduce the experimentation time, and to estimate the lifetime distribution of the investigated system. First, we introduce the concept of the “aging factor”, which offers fine control of the aging effects at the experimental level. The aging factors are estimated via sensitivity analyses based on the statistical design of experiments. Aging factors are then used together with the method of accelerated degradation testing to estimate the lifetime distribution of the system under test at various stress levels. This approach requires us to estimate a relationship model between stress levels and aging degradation. Such models are called stress-accelerated aging relationships. Finally, the estimated relationship models enable us to estimate the lifetime distribution under use conditions. The proposed approach is used in estimating the lifetime distribution of a web server with software aging symptoms. The main result is the reduction of the experimental time by a factor close to 685 in comparison with experiments executed without the use of our technique.", "title": "" }, { "docid": "d9df98fbd7281b67347df0f2643323fa", "text": "In text classification, predefined categories are assigned to natural language text. In the “bag-of-words” representation, a document is represented by word values that express how frequently each word appears in the document. But large documents may pose problems because they contain irrelevant or abundant information. 
All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.", "title": "" }, { "docid": "169ed8d452a7d0dd9ecf90b9d0e4a828", "text": "Technology is common in the domain of knowledge distribution, but it rarely enhances the process of knowledge use. Distribution delivers knowledge to the potential user's desktop but cannot dictate what he or she does with it thereafter. It would be interesting to envision technologies that help to manage personal knowledge as it applies to decisions and actions. The viewpoints about knowledge vary from individual, community, society, personnel development or national development. Personal Knowledge Management (PKM) integrates Personal Information Management (PIM), focused on individual skills, with Knowledge Management (KM). KM Software is a subset of Enterprise content management software and which contains a range of software that specialises in the way information is collected, stored and/or accessed. This article focuses on KM skills, PKM and PIM Open Sources Software, Social Personal Management and also highlights the Comparison of knowledge base management software and its use.", "title": "" }, { "docid": "32334cf8520dde6743aa66b4e35742ff", "text": "LinKBase® is a biomedical ontology. 
Its hierarchical structure, coverage, use of operational, formal and linguistic relationships, combined with its underlying language technology, make it an excellent ontology to support Natural Language Processing and Understanding (NLP/NLU) and data integration applications. In this paper we will describe the structure and coverage of LinKBase®. In addition, we will discuss the editing of LinKBase® and how domain experts are guided by specific editing rules to ensure modeling quality and consistency. Finally, we compare the structure of LinKBase® to the structure of third-party terminologies and ontologies and discuss the integration of these data sources into LinKBase®.", "title": "" }, { "docid": "d94f2c4123abe14ca4941c8d4aaee07b", "text": "Performance self-tuning in database systems is challenging work, since it is hard to identify tuning parameters and to strike a balance when choosing proper configuration values for them. In this paper, we propose a neural-network-based algorithm for performance self-tuning. We first extract the Automatic Workload Repository report automatically, and then identify key system performance parameters and performance indicators. We then use the collected data to construct a Neural Network model. Finally, we develop a self-tuning algorithm to tune these parameters. 
Experimental results for the Oracle database system in a TPC-C workload environment show that the proposed method can dynamically improve performance.", "title": "" }, { "docid": "3a55674e92d3b8dd38eaa5058aed3425", "text": "OBJECTIVE\nThe objective of the present systematic review was to analyze the potential effect of incorporation of cantilever extensions on the survival rate of implant-supported fixed partial dental prostheses (FPDPs) and the incidence of technical and biological complications, as reported in longitudinal studies with at least 5 years of follow-up.\n\n\nMETHODS\nA MEDLINE search was conducted up to and including November 2008 for longitudinal studies with a mean follow-up period of at least 5 years. Two reviewers performed screening and data abstraction independently. Prosthesis-based data on survival/failure rate, technical complications (prosthesis-related problems, implant loss) and biological complications (marginal bone loss) were analyzed.\n\n\nRESULTS\nThe search provided 103 titles with abstracts. Full-text analysis was performed on 12 articles, out of which three were finally included. Two of the studies had a prospective or retrospective case-control design, whereas the third was a prospective cohort study. The 5-year survival rate of cantilever FPDPs varied between 89.9% and 92.7% (weighted mean 91.9%), with implant fracture as the main cause for failures. The corresponding survival rate for FPDPs without cantilever extensions was 96.3-96.2% (weighted mean 95.8%). Technical complications related to the supra-constructions in the three included studies were reported to occur at a frequency of 13-26% (weighted mean 20.3%) for cantilever FPDPs compared with 0-12% (9.7%) for non-cantilever FPDPs. The most common complications were minor porcelain fractures and bridge-screw loosening. 
For cantilever FPDPs, the 5-year event-free survival rate varied between 66.7% and 79.2% (weighted mean 71.7%) and between 83.1% and 96.3% (weighted mean 85.9%) for non-cantilever FPDPs. No statistically significant differences were reported with regard to peri-implant bone-level change between the two prosthetic groups, either at the prosthesis or at the implant level.\n\n\nCONCLUSION\nData on implant-supported FPDPs with cantilever extensions are limited and therefore survival and complication rates should be interpreted with caution. The incorporation of cantilevers into implant-borne prostheses may be associated with a higher incidence of minor technical complications.", "title": "" }, { "docid": "bf1e0eaf8ab49fa7093b49f0b005f1b0", "text": "The emergence of the paradigm of Internet of Things (IoT) has necessitated the development of machine-to-machine (M2M) protocols geared towards wireless sensor network interfacing to the Internet and implementing machine learning algorithms over the cloud. This paper discusses the viability of the MQ Telemetry Transport (MQTT) protocol for such applications. This paper introduces MQTT along with its merits and demerits and suitability towards IoT applications. Then it outlines an implementation of a typical IoT application involving ubiquitous sensing, M2M communication, cloud computing and semantic data extraction. The results of this experiment are then analyzed. Finally, the paper looks at future improvements in the proposed architecture for widespread use.", "title": "" }, { "docid": "a5db348e34b61d61db4e861cb8483f5b", "text": "Statistical rituals largely eliminate statistical thinking in the social sciences. Rituals are indispensable for identification with social groups, but they should be the subject rather than the procedure of science. 
What I call the “null ritual” consists of three steps: (1) set up a statistical null hypothesis, but do not specify your own hypothesis nor any alternative hypothesis, (2) use the 5% significance level for rejecting the null and accepting your hypothesis, and (3) always perform this procedure. I report evidence of the resulting collective confusion and fears about sanctions on the part of students and teachers, researchers and editors, as well as textbook writers. © 2004 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "32135b15574c700a5c1b47671db7072b", "text": "The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes's rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-square-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. 
In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased.", "title": "" }, { "docid": "3ff58e78ac9fe623e53743ad05248a30", "text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general power savings of about 30% and 36% reduction on toggle rate can be seen with different complex clock- gating methods with respect to no clock-gating in the design.", "title": "" } ]
scidocsrr
cf67efe5867d322be8bafa5244d5bfb8
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots
[ { "docid": "fdbca2e02ac52afd687331048ddee7d3", "text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-based fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.", "title": "" }, { "docid": "c2aed51127b8753e4b71da3b331527cd", "text": "In this paper, we present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs; one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise where SNR is uncertain and demonstrate improved performance over type-1 FLSs.", "title": "" }, { "docid": "338a8efaaf4a790b508705f1f88872b2", "text": "During the past several years, fuzzy control has emerged as one of the most active and fruitful areas for research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations. 
Fuzzy control is based on fuzzy logic, a logical system that is much closer in spirit to human thinking and natural language than traditional logical systems. The fuzzy logic controller (FLC) based on fuzzy logic provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. A survey of the FLC is presented; a general methodology for constructing an FLC and assessing its performance is described; and problems that need further research are pointed out. In particular, the exposition includes a discussion of fuzzification and defuzzification strategies, the derivation of the database and fuzzy control rules, the definition of fuzzy implication, and an analysis of fuzzy reasoning mechanisms. An FLC may be regarded as a means of emulating a skilled human operator. More generally, the use of an FLC may be viewed as still another step in the direction of modeling human decision-making within the conceptual framework of fuzzy logic and approximate reasoning. In this context, the forward data-driven inference (generalized modus ponens) plays an especially important role. In what follows, we shall investigate fuzzy implication functions, the sentence connectives “and” and “also”, compositional operators, inference mechanisms, and other concepts that are closely related to the decision-making logic of an FLC. In general, a fuzzy control rule is a fuzzy relation which is expressed as a fuzzy implication. In fuzzy logic, there are many ways in which a fuzzy implication may be defined. The definition of a fuzzy implication may be expressed as a fuzzy implication function. The choice of a fuzzy implication function reflects not only the intuitive criteria for implication but also the effect of the connective “also”. 
I) Basic Properties of a Fuzzy Implication Function: The choice of a fuzzy implication function involves a number of criteria, which are discussed in the literature. Prior work considered the following basic characteristics of a fuzzy implication function: fundamental property, smoothness property, unrestricted inference, symmetry of generalized modus ponens and generalized modus tollens, and a measure of propagation of fuzziness. All of these properties are justified on purely intuitive grounds. We prefer to say …", "title": "" } ]
[ { "docid": "9a921d579e9a9a213939b6cf9fa2ac9a", "text": "This paper presents a generic methodology to optimize constellations based on their geometrical shaping for bit-interleaved coded modulation (BICM) systems. While the method can be applicable to any wireless standard design it has been tailored to two delivery scenarios typical of broadcast systems: 1) robust multimedia delivery and 2) UHDTV quality bitrate services. The design process is based on maximizing the BICM channel capacity for a given power constraint. The major contribution of this paper is a low complexity optimization algorithm for the design of optimal constellation schemes. The proposal consists of a set of initial conditions for a particle swarm optimization algorithm, and afterward, a customized post processing procedure for further improving the constellation alphabet. According to the broadcast application cases, the sizes of the constellations proposed range from 16 to 4096 symbols. The BICM channel capacities and performance of the designed constellations are compared to conventional quadrature amplitude modulation constellations for different application scenarios. The results show a significant improvement in terms of system performance and BICM channel capacities under additive white Gaussian noise and Rayleigh independently and identically distributed channel conditions.", "title": "" }, { "docid": "f370a8ff8722d341d6e839ec2c7217c1", "text": "We give the first O(mpolylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. 
These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.", "title": "" }, { "docid": "7e439ac3ff2304b6e1aaa098ff44b0cb", "text": "Geological structures, such as faults and fractures, appear as image discontinuities or lineaments in remote sensing data. Geologic lineament mapping is a very important issue in geo-engineering, especially for construction site selection, seismic and risk assessment, mineral exploration and hydrogeological research. Classical methods of lineament extraction are based on semi-automated (or visual) interpretation of optical data and digital elevation models. We developed a freely available Matlab-based toolbox TecLines (Tectonic Lineament Analysis) for locating and quantifying lineament patterns using satellite data and digital elevation models. TecLines consists of a set of functions including frequency filtering, spatial filtering, tensor voting, Hough transformation, and polynomial fitting. Due to differences in the mathematical background of the edge detection and edge linking procedure as well as the breadth of the methods, we introduce the approach in two parts. In this first study, we present the steps that lead to edge detection. We introduce the data pre-processing using selected filters in spatial and frequency domains. We then describe the application of the tensor-voting framework to improve position and length accuracies of the detected lineaments. We demonstrate the robustness of the approach in a complex area in the northeast of Afghanistan using a panchromatic QUICKBIRD-2 image with 1-meter resolution. Finally, we compare the results of TecLines with manual lineament extraction, and other lineament extraction algorithms, as well as a published fault map of the study area. OPEN ACCESS Remote Sens.
2014, 6, 5939.", "title": "" }, { "docid": "453af7094a854afd1dfb2e7dc36a7cca", "text": "In this paper, we propose a new approach for the static detection of malicious code in executable programs. Our approach rests on a semantic analysis based on behaviour that even makes possible the detection of unknown malicious code. This analysis is carried out directly on binary code. Static analysis offers techniques for predicting properties of the behaviour of programs without running them. The static analysis of a given binary executable is achieved in three major steps: construction of an intermediate representation, flow-based analysis that catches security-oriented program behaviour, and static verification of critical behaviours against security policies (model checking). 1. Motivation and Background With the advent and the rising popularity of networks, Internet, intranets and distributed systems, security is becoming one of the focal points of research. As a matter of fact, more and more people are concerned with malicious code that could exist in software products. A malicious code is a piece of code that can affect the secrecy, the integrity, the data and control flow, and the functionality of a system. Therefore, their detection is a major concern within the computer science community as well as within the user community. (This research is jointly funded by a research grant from the Natural Sciences and Engineering Research Council, NSERC, Canada and also by a research contract from the Defence Research Establishment, Valcartier (DREV), 2459, Pie XI Nord, Val-Bélair, QC, Canada, G3J 1X5.) As malicious code can affect the data and control flow of a program, static flow analysis may naturally be helpful as part of the detection process. In this paper, we address the problem of static detection of malicious code in binary executables. 
The primary objective of this research initiative is to elaborate practical methods and tools with robust theoretical foundations for the static detection of malicious code. The rest of the paper is organized in the following way. Section 2 is devoted to a comparison of static and dynamic approaches. Section 3 presents our approach to the detection of malices in binary executable code. Section 4 discusses the implementation of our approach. Finally, a few remarks and a discussion of future research are ultimately sketched as a conclusion in Section 5. 2. Static vs dynamic analysis There are two main approaches for the detection of malices: static analysis and dynamic analysis. Static analysis consists in examining the code of programs to determine properties of the dynamic execution of these programs without running them. This technique has been used extensively in the past by compiler developers to carry out various analyses and transformations aiming at optimizing the code [10]. Static analysis is also used in reverse engineering of software systems and for program understanding [3, 4]. Its use for the detection of malicious code is fairly recent. Dynamic analysis mainly consists in monitoring the execution of a program to detect malicious behaviour. Static analysis has the following advantages over dynamic analysis: • Static analysis techniques permit exhaustive analysis. They are not bound to a specific execution of a program and can give guarantees that apply to all executions of the program. In contrast, dynamic analysis techniques only allow examination of behaviours that correspond to selected test cases. • A verdict can be given before execution, where it may be difficult to determine the proper action to take in the presence of malices. • There is no run-time overhead. However, it may be impossible to certify statically that certain properties hold (e.g., due to undecidability). In this case, dynamic monitoring may be the only solution. 
Thus, static analysis and dynamic analysis are complementary. Static analysis can be used first, and properties that cannot be asserted statically can be monitored dynamically. As mentioned in the introduction, in this paper, we are concerned with static analysis techniques. Not much has been published about their use for the detection of malicious code. In [8], the authors propose a method for statically detecting malicious code in C programs. Their method is based on so-called tell-tale signs, which are program properties that allow one to distinguish between malicious and benign programs. The authors combine the tell-tale sign approach with program slicing in order to produce small fragments of large programs that can be easily analyzed. 3. Description of the Approach Static analysis techniques are generally used to operate on source code. However, as we explained in the introduction, we need to apply them to binary code, and thus, we had to adapt and evolve these techniques. Our approach is structured in three major steps: Firstly, the binary code is translated into an internal intermediate form (see Section 3.1); secondly, this intermediate form is abstracted through flow-based analysis as various relevant graphs (control-flow graph, data-flow graph, call graph, critical-API graph, etc.) (Section 3.2); the third step is the static verification and consists in checking these graphs against security policies (Section 3.3). 3.1 Intermediate Representation A binary executable is the machine code version of a high-level or assembly program that has been compiled (or assembled) and linked for a particular platform and operating system. The general format of binary executables varies widely among operating systems. For example, the Portable Executable format (PE) is used by the Windows NT/98/95 operating system. 
The PE format includes comprehensive information about the different sections of the program that form the main part of the file, including the following segments: • .text, which contains the code and the entry point of the application, • .data, which contains various types of data, • .idata and .edata, which contain respectively the list of imported and exported APIs for an application or a Dynamic-Link Library (DLL). The code segment (.text) constitutes the main part of the file; in fact, this section contains all the code that is to be analyzed. In order to translate an executable program into an equivalent high-level-language program, we use the disassembly tool IDA32 Pro [7], which can disassemble various types of executable files (ELF, EXE, PE, etc.) for several processors and operating systems (Windows 98, Windows NT, etc.). Also, IDA32 automatically recognizes calls to the standard libraries (i.e., API calls) for a long list of compilers. Statically analysing a program requires the construction of the syntax tree of this program, also called intermediate representation. The various techniques of static analysis are based on this abstract representation. The goal of the first step is to disassemble the binary code and then to parse the assembly code thus generated to produce the syntax tree (Figure 1). API: Application Program Interface.", "title": "" }, { "docid": "406e6a8966aa43e7538030f844d6c2f0", "text": "The idea of developing software components was envisioned more than forty years ago. In the past two decades, Component-Based Software Engineering (CBSE) has emerged as a distinguishable approach in software engineering, and it has attracted the attention of many researchers, which has led to many results being published in the research literature. There is a huge amount of knowledge encapsulated in conferences and journals targeting this area, but a systematic analysis of that knowledge is missing. 
For this reason, we aim to investigate the state-of-the-art of the CBSE area through a detailed literature review. To do this, 1231 studies dating from 1984 to 2012 were analyzed. Using the available evidence, this paper addresses five dimensions of CBSE: main objectives, research topics, application domains, research intensity and applied research methods. The main objectives found were to increase productivity, save costs and improve quality. The most addressed application domains are homogeneously divided between commercial-off-the-shelf (COTS), distributed and embedded systems. Intensity of research showed a considerable increase in the last fourteen years. In addition to the analysis, this paper also synthesizes the available evidence, identifies open issues and points out areas that call for further research. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1adc476c1e322d7cc7a0c93e726a8e2c", "text": "A wireless body area network is a radio-frequency-based wireless networking technology that interconnects tiny nodes with sensor or actuator capabilities in, on, or around a human body. In a civilian networking environment, WBANs provide ubiquitous networking functionalities for applications varying from healthcare to safeguarding of uniformed personnel. This article surveys pioneer WBAN research projects and enabling technologies. It explores application scenarios, sensor/actuator devices, radio systems, and interconnection of WBANs to provide perspective on the trade-offs between data rate, power consumption, and network coverage. Finally, a number of open research issues are discussed.", "title": "" }, { "docid": "d6bcf73a0237416318896154dfb0a764", "text": "Singular Value Decomposition (SVD) is a popular approach in various network applications, such as link prediction and network parameter characterization. Incremental SVD approaches are proposed to process newly changed nodes and edges in dynamic networks. 
However, incremental SVD approaches inevitably suffer from serious error accumulation due to the approximation made on incremental updates. SVD restart is an effective approach to reset the aggregated error, but when to restart SVD for dynamic networks is not addressed in the literature. In this paper, we propose TIMERS, Theoretically Instructed Maximum-Error-bounded Restart of SVD, a novel approach which optimally sets the restart time in order to reduce error accumulation in time. Specifically, we monitor the margin between the reconstruction loss of incremental updates and the minimum loss in the SVD model. To reduce the complexity of monitoring, we theoretically develop a lower bound of the SVD minimum loss for dynamic networks and use the bound to replace the minimum loss in monitoring. By setting a maximum tolerated error as a threshold, we can trigger SVD restart automatically when the margin exceeds this threshold. We prove that the time complexity of our method is linear with respect to the number of local dynamic changes, and our method is general across different types of dynamic networks. We conduct extensive experiments on several synthetic and real dynamic networks. The experimental results demonstrate that our proposed method significantly outperforms the existing methods by reducing 27% to 42% in terms of the maximum error for dynamic network reconstruction when fixing the number of restarts. Our method reduces the number of restarts by 25% to 50% when fixing the maximum error tolerated.", "title": "" }, { "docid": "8ac0bb34c0c393dddf91e81182632551", "text": "The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. 
In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x · sigmoid(βx), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.", "title": "" }, { "docid": "56d31440ed955158ecb29ff743029bb2", "text": "We propose a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation—an oblong complex-valued matrix whose columns are orthonormal—and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. 
We demonstrate its efficacy through examples involving one, two, and three transmitter antennas. Index Terms—Multi-element antenna arrays, wireless communications, fading channels, transmit diversity, receive diversity, Unitary Space-Time Modulation", "title": "" }, { "docid": "76dcd35124d95bffe47df5decdc5926a", "text": "While kernel drivers have long been known to pose huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers presents some of the hardest challenges to static analysis, and their tight coupling with the hardware makes dynamic analysis infeasible in most cases. In this work, we present DR. CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by only sacrificing soundness in very few cases to ensure that our technique is both scalable and precise. DR. CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and field-sensitive on kernel drivers. To demonstrate the scalability and efficacy of DR. CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.", "title": "" }, { "docid": "8f0ed599cec42faa0928a0931ee77b28", "text": "This paper describes the Connector and Acceptor patterns. The intent of these patterns is to decouple the active and passive connection roles, respectively, from the tasks a communication service performs once connections are established. 
Common examples of communication services that utilize these patterns include WWW browsers, WWW servers, object request brokers, and “superservers” that provide services like remote login and file transfer to client applications. This paper illustrates how the Connector and Acceptor patterns can help decouple the connection-related processing from the service processing, thereby yielding more reusable, extensible, and efficient communication software. When used in conjunction with related patterns like the Reactor [1], Active Object [2], and Service Configurator [3], the Acceptor and Connector patterns enable the creation of highly extensible and efficient communication software frameworks [4] and applications [5]. This paper is organized as follows: Section 2 outlines background information on networking and communication protocols necessary to appreciate the patterns in this paper; Section 3 motivates the need for the Acceptor and Connector patterns and illustrates how they have been applied to a production application-level Gateway; Section 4 describes the Acceptor and Connector patterns in detail; and Section 5 presents concluding remarks.", "title": "" }, { "docid": "df00815ab7f96a286ca336ecd85ed821", "text": "In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data with length n. However, few existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other hand, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. 
In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems, and then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gains better reconstruction results on real MR images than general tree-based solvers or algorithms.", "title": "" }, { "docid": "6a6063c05941c026b083bfcc573520f8", "text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781, 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. 
Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.", "title": "" }, { "docid": "b776307764d3946fc4e7f6158b656435", "text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices can not handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. 
This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.", "title": "" }, { "docid": "0d2e9d514586f083007f5e93d8bb9844", "text": "Recovering Matches: Analysis-by-Synthesis Results Starting point: Unsupervised learning of image matching Applications: Feature matching, structure from motion, dense optical flow, recognition, motion segmentation, image alignment Problem: Difficult to do directly (e.g. based on video) Insights: Image matching is a sub-problem of frame interpolation Frame interpolation can be learned from natural video sequences", "title": "" }, { "docid": "796af76343bbf770afb521b6c096fbdf", "text": "This paper presents a rapid hierarchical radiosity algorithm for illuminating scenes containing large polygonal patches. The algorithm constructs a hierarchical representation of the form factor matrix by adaptively subdividing patches into subpatches according to a user-supplied error bound. The algorithm guarantees that all form factors are calculated to the same precision, removing many common image artifacts due to inaccurate form factors. More importantly, the algorithm decomposes the form factor matrix into at most O(n) blocks (where n is the number of elements). Previous radiosity algorithms represented the element-to-element transport interactions with n2 form factors. Visibility algorithms are given that work well with this approach. Standard techniques for shooting and gathering can be used with the hierarchical representation to solve for equilibrium radiosities, but we also discuss using a brightness-weighted error criteria, in conjunction with multigridding, to even more rapidly progressively refine the image.", "title": "" }, { "docid": "df83a6388ce2b16060aa9da62a86894a", "text": "Embodied agents have received large amounts of interest in recent years. 
They are often equipped with the ability to express emotion, but without understanding the impact this can have on the user. Given the amount of research studies that are utilising agent technology with affective capabilities, now is an important time to review the influence of synthetic agent emotion on user attitudes, perceptions and behaviour. We therefore present a structured overview of the research into emotional simulation in agents, providing a summary of the main studies, re-formulating appropriate results in terms of the emotional effects demonstrated, and an in-depth analysis illustrating the similarities and inconsistencies between different experiments across a variety of different domains. We highlight important lessons, future areas for research, and provide a set of guidelines for conducting further research. © 2009 Published by Elsevier Ltd.", "title": "" }
This has huge implications for the field of sports and human performance and opens a whole new field of research in the clinical setting.", "title": "" }, { "docid": "31d03e6933e1e289cd8cda641bd08b68", "text": "BACKGROUND\nAnterior cruciate ligament reconstruction (ACLR) has been established as the gold standard for treatment of complete ruptures of the anterior cruciate ligament (ACL) in active, symptomatic individuals. In contrast, treatment of partial tears of the ACL remains controversial. Biologically augmented ACL-repair techniques are expanding in an attempt to regenerate and improve healing and outcomes of both the native ACL and the reconstructed graft tissue.\n\n\nPURPOSE\nTo review the biologic treatment options for partial tears of the ACL.\n\n\nSTUDY DESIGN\nReview.\n\n\nMETHODS\nA literature review was performed that included searches of PubMed, Medline, and Cochrane databases using the following keywords: partial tear of the ACL, ACL repair, bone marrow concentrate, growth factors/healing enhancement, platelet-rich plasma (PRP), stem cell therapy.\n\n\nRESULTS\nThe use of novel biologic ACL repair techniques, including growth factors, PRP, stem cells, and bioscaffolds, have been reported to result in promising preclinical and short-term clinical outcomes.\n\n\nCONCLUSION\nThe potential benefits of these biological augmentation approaches for partial ACL tears are improved healing, better proprioception, and a faster return to sport and activities of daily living when compared with standard reconstruction procedures. However, long-term studies with larger cohorts of patients and with technique validation are necessary to assess the real effect of these approaches.", "title": "" }, { "docid": "fb0e9f6f58051b9209388f81e1d018ff", "text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. 
This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledge guides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviation in the instances of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.", "title": "" }
scidocsrr
bfee2c39744d2861320c8b7a3d93835e
MEC: Memory-efficient Convolution for Deep Neural Network
[ { "docid": "e735ddafd0dc48ea48e6ccb85ff96129", "text": "Convolutional Neural Networks (CNNs) have been successfully used for many computer vision applications. It would be beneficial to these applications if the computational workload of CNNs could be reduced. In this work we analyze the linear algebraic properties of CNNs and propose an algorithmic modification to reduce their computational workload. A reduction of up to 47% can be achieved without any change in the image recognition results or the addition of any hardware accelerators.", "title": "" }, { "docid": "0a3f5ff37c49840ec8e59cbc56d31be2", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" }, { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability.
Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real-world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" } ]
[ { "docid": "817c30996704fa58d8eb527fced31630", "text": "Image classification, a complex perceptual task with many important real-life applications, faces a major challenge in the presence of noise. Noise degrades the performance of classifiers and makes them less suitable in real-life scenarios. To address this issue, several studies have been conducted that utilize a denoising autoencoder (DAE) to restore original images from noisy images, after which a Convolutional Neural Network (CNN) is used for classification. The existing models perform well only when the noise levels present in the training set and test set are the same or differ only a little. To fit a model to real-life applications, it should be independent of the noise level. The aim of this study is to develop a robust image classification system which performs well at regular to massive noise levels. The proposed method first trains a DAE with low-level noise-injected images and a CNN with noiseless native images independently. Then it arranges these two trained models in three different combinational structures: CNN, DAE-CNN, and DAE-DAECNN, to classify images corrupted with zero, regular and massive noise, respectively. The final system outcome is chosen by applying a winner-takes-all combination to the individual outcomes of the three structures. Although the proposed system consists of three DAEs and three CNNs in different structure layers, the DAEs and CNNs are copies of the same DAE and CNN trained initially, which also makes it computationally efficient. In DAE-DAECNN, two identical DAEs are arranged in a cascaded structure to make it well suited for classifying massively noisy data while the DAE is trained with low-noise image data. The proposed method is tested on the MNIST handwritten numeral dataset with different noise levels. Experimental results revealed the effectiveness of the proposed method, showing better results than the individual structures as well as other related methods.
Keywords—Image denoising; denoising autoencoder; cascaded denoising autoencoder; convolutional neural network", "title": "" }, { "docid": "8a0d6cb6e2d54037a007c901959fcdcf", "text": "The trade-off between relevance and fairness in personalized recommendations has been explored in recent works, with the goal of minimizing learned discrimination towards certain demographics while still producing relevant results. We present a fairness-aware variation of the Maximal Marginal Relevance (MMR) re-ranking method which uses representations of demographic groups computed using a labeled dataset. This method is intended to incorporate fairness with respect to these demographic groups. We perform an experiment on a stock photo dataset and examine the trade-off between relevance and fairness against a well known baseline, MMR, by using human judgment to examine the results of the re-ranking when using different fractions of a labeled dataset, and by performing a quantitative analysis on the ranked results of a set of query images. We show that our proposed method can incorporate fairness in the ranked results while obtaining higher precision than the baseline, while our case study shows that even a limited amount of labeled data can be used to compute the representations to obtain fairness. This method can be used as a post-processing step for recommender systems and search.", "title": "" }, { "docid": "ab5e3f7ad73d8143ae4dc4db40ebfade", "text": "Knowledge is an essential organizational resource that provides a sustainable competitive advantage in a highly competitive and dynamic economy. SMEs must therefore consider how to promote the sharing of knowledge and expertise between experts who possess it and novices who need to know. Thus, they need to emphasize and more effectively exploit knowledge-based resources that already exist within the firm.
A key issue for the failure of any KM initiative to facilitate knowledge sharing is the lack of consideration of how the organizational and interpersonal context as well as individual characteristics influence knowledge sharing behaviors. Due to the potential benefits that could be realized from knowledge sharing, this study focused on knowledge sharing as one fundamental knowledge-centered activity. Based on the review of previous literature regarding knowledge sharing within and across firms, this study infers that knowledge sharing in a workplace can be influenced by the organizational, individual-level and technological factors. This study proposes a conceptual model of knowledge sharing within a broad KM framework as an indispensable tool for SMEs' internationalization. The model was assessed by using data gathered from employees and managers of twenty-five (25) different SMEs in Norway. The proposed model of knowledge sharing argues that knowledge sharing is influenced by the organizational, individual-level and technological factors. The study also found a mediated effect between the organizational factors as well as between the technological factor and knowledge sharing behavior (i.e., being mediated by the individual-level factors). The test results were statistically significant. The organizational factors were acknowledged to have a highly significant role in ensuring that knowledge sharing takes place in the workplace, although the remaining factors play a critical role in the knowledge sharing process. For instance, the technological factor may effectively help in creating, storing and distributing explicit knowledge in an accessible and expeditious manner. The implications of the empirical findings are also provided in this study.", "title": "" }, { "docid": "16c9b857bbe8d9f13f078ddb193d7483", "text": "We present TweetMotif, an exploratory search application for Twitter.
Unlike traditional approaches to information retrieval, which present a simple list of messages, TweetMotif groups messages by frequent significant terms — a result set’s subtopics — which facilitate navigation and drilldown through a faceted search interface. The topic extraction system is based on syntactic filtering, language modeling, near-duplicate detection, and set cover heuristics. We have used TweetMotif to deflate rumors, uncover scams, summarize sentiment, and track political protests in real-time. A demo of TweetMotif, plus its source code, is available at http://tweetmotif.com. Introduction and Description On the microblogging service Twitter, users post millions of very short messages every day. Organizing and searching through this large corpus is an exciting research problem. Since messages are so small, we believe microblog search requires summarization across many messages at once. Our system, TweetMotif, responds to user queries, first retrieving several hundred recent matching messages from a simple index; we use the Twitter Search API. Instead of simply showing this result set as a list, TweetMotif extracts a set of themes (topics) to group and summarize these messages. A topic is simultaneously characterized by (1) a 1- to 3-word textual label, and (2) a set of messages, whose texts must all contain the label. TweetMotif’s user interface is inspired by faceted search, which has been shown to aid Web search tasks (Hearst et al. 2002). The main screen is a two-column layout. The left column is a list of themes that are related to the current search term, while the right column presents actual tweets, grouped by theme. As themes are selected on the left column, a sample of tweets for that theme appears at the top of the right column, pushing down (but not removing) tweet results for any previously selected related themes. This allows users to explore and compare multiple related themes at once.
The set of topics is chosen to try to satisfy several criteria, which often conflict: 1. Frequency contrast: Topic label phrases should be frequent in the query subcorpus, but infrequent among general Twitter messages. This ensures relevance to the query while eliminating overly generic terms. 2. Topic diversity: Topics should be chosen such that their messages and label phrases minimally overlap. Overlapping topics repetitively fill the same information niche; only one should be used. 3. Topic size: A topic that includes too few messages is bad; it is overly specific. 4. Small number of topics: Screen real-estate and concomitant user cognitive load are limited resources. The goal is to provide the user a concise summary of themes and variation in the query subcorpus, then allow the user to navigate to individual topics to see their associated messages, and allow recursive drilldown. The approach is related to document clustering (though a message can belong to multiple topics) and text summarization (topic labels are a high-relevance subset of text across messages). We heuristically proceed through several stages of analysis. Figure 1: Screenshot of TweetMotif. Copyright © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.", "title": "" }, { "docid": "3e80695856aa94def292e22ea46abee6", "text": "Hyaluronic acid (HA), an immunoneutral polysaccharide that is ubiquitous in the human body, is crucial for many cellular and tissue functions and has been in clinical use for over thirty years. When chemically modified, HA can be transformed into many physical forms-viscoelastic solutions, soft or stiff hydrogels, electrospun fibers, non-woven meshes, macroporous and fibrillar sponges, flexible sheets, and nanoparticulate fluids-for use in a range of preclinical and clinical settings.
Many of these forms are derived from the chemical crosslinking of pendant reactive groups by addition/condensation chemistry or by radical polymerization. Clinical products for cell therapy and regenerative medicine require crosslinking chemistry that is compatible with the encapsulation of cells and injection into tissues. Moreover, an injectable clinical biomaterial must meet marketing, regulatory, and financial constraints to provide affordable products that can be approved, deployed to the clinic, and used by physicians. Many HA-derived hydrogels meet these criteria, and can deliver cells and therapeutic agents for tissue repair and regeneration. This progress report covers both basic concepts and recent advances in the development of HA-based hydrogels for biomedical applications.", "title": "" }, { "docid": "4d7e876d61060061ba6419869d00675e", "text": "Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on a contextual modeling approach that estimate deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to contextaware recommendation than modeling contextual rating deviations.", "title": "" }, { "docid": "0cdda0d784780f65d92bc778279af17c", "text": "This paper makes the case for TaaS--automated software testing as a cloud-based service. 
We present three kinds of TaaS: a \"programmer's sidekick\" enabling developers to thoroughly and promptly test their code with minimal upfront resource investment; a \"home edition\" on-demand testing service for consumers to verify the software they are about to install on their PC or mobile device; and a public \"certification service,\" akin to Underwriters Labs, that independently assesses the reliability, safety, and security of software.\n TaaS automatically tests software, without human involvement from the service user's or provider's side. This is unlike today's \"testing as a service\" businesses, which employ humans to write tests. Our goal is to take recently proposed techniques for automated testing--even if usable only on toy programs--and make them practical by modifying them to harness the resources of compute clouds. Preliminary work suggests it is technically feasible to do so, and we find that TaaS is also compelling from a social and business point of view.", "title": "" }
Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium, which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.", "title": "" }
We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" }, { "docid": "f975a1fa2905f8ae42ced1f13a88a15b", "text": "This paper presents a new method of detecting and tracking the boundaries of drivable regions in road without road-markings. As unmarked roads connect residential places to public roads, the capability of autonomously driving on such a roadway is important to truly realize self-driving cars in daily driving scenarios. 
To detect the left and right boundaries of drivable regions, our method first examines the image region in front of the ego-vehicle and then uses the appearance information of that region to identify the boundary of the drivable region from input images. Due to variation in the image acquisition conditions, the image features necessary for boundary detection may not be present. When this happens, a boundary detection algorithm working on a frame-by-frame basis would fail to successfully detect the boundaries. To effectively handle these cases, our method tracks, using a Bayes filter, the detected boundaries over frames. Experiments using real-world videos show promising results.", "title": "" }, { "docid": "f5e6df40898a5b84f8e39784f9b56788", "text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self-reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001).
There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.", "title": "" }, { "docid": "b7dec8c2a0ef689ef0cac1eb6ed76cc5", "text": "One of the most difficult speech recognition tasks is accurate recognition of human to human communication. Advances in deep learning over the last few years have produced major speech recognition improvements on the representative Switchboard conversational corpus. Word error rates that just a few years ago were 14% have dropped to 8.0%, then 6.6% and most recently 5.8%, and are now believed to be within striking range of human performance. This then raises two issues what IS human performance, and how far down can we still drive speech recognition error rates? A recent paper by Microsoft suggests that we have already achieved human performance. In trying to verify this statement, we performed an independent set of human performance measurements on two conversational tasks and found that human performance may be considerably better than what was earlier reported, giving the community a significantly harder goal to achieve. We also report on our own efforts in this area, presenting a set of acoustic and language modeling techniques that lowered the word error rate of our own English conversational telephone LVCSR system to the level of 5.5%/10.3% on the Switchboard/CallHome subsets of the Hub5 2000 evaluation, which at least at the writing of this paper is a new performance milestone (albeit not at what we measure to be human performance!). 
On the acoustic side, we use a score fusion of three models: one LSTM with multiple feature inputs, a second LSTM trained with speaker-adversarial multitask learning and a third residual net (ResNet) with 25 convolutional layers and time-dilated convolutions. On the language modeling side, we use word and character LSTMs and convolutional WaveNet-style language models.", "title": "" }, { "docid": "05929ba76f75ed36a41ca339b71b15b8", "text": "This paper presents a complete method for pedestrian detection applied to infrared images. First, we study an image descriptor based on histograms of oriented gradients (HOG), associated with a support vector machine (SVM) classifier and evaluate its efficiency. After having tuned the HOG descriptor and the classifier, we include this method in a complete system, which deals with stereo infrared images. This approach gives good results for window classification, and a preliminary test applied on a video sequence proves that this approach is very promising", "title": "" }, { "docid": "85657981b55e3a87e74238cd373b3db6", "text": "INTRODUCTION\nLung cancer mortality rates remain at unacceptably high levels. Although mitochondrial dysfunction is a characteristic of most tumor types, mitochondrial dynamics are often overlooked. Altered rates of mitochondrial fission and fusion are observed in lung cancer and can influence metabolic function, proliferation and cell survival.\n\n\nAREAS COVERED\nIn this review, the authors outline the mechanisms of mitochondrial fission and fusion. They also identify key regulatory proteins and highlight the roles of fission and fusion in metabolism and other cellular functions (e.g., proliferation, apoptosis) with an emphasis on lung cancer and the interaction with known cancer biomarkers. 
They also examine the current therapeutic strategies reported as altering mitochondrial dynamics and review emerging mitochondria-targeted therapies.\n\n\nEXPERT OPINION\nMitochondrial dynamics are an attractive target for therapeutic intervention in lung cancer. Mitochondrial dysfunction, despite its molecular heterogeneity, is a common abnormality of lung cancer. Targeting mitochondrial dynamics can alter mitochondrial metabolism, and many current therapies already non-specifically affect mitochondrial dynamics. A better understanding of mitochondrial dynamics and their interaction with currently identified cancer 'drivers' such as Kirsten-Rat Sarcoma Viral Oncogene homolog will lead to the development of novel therapeutics.", "title": "" }, { "docid": "8584fc5cbd280874da5cebe016def0fa", "text": "This paper considers the problem of mining closed frequent itemsets over a data stream sliding window using limited memory space. We design a synopsis data structure to monitor transactions in the sliding window so that we can output the current closed frequent itemsets at any time. Due to time and memory constraints, the synopsis data structure cannot monitor all possible itemsets. However, monitoring only frequent itemsets will make it impossible to detect new itemsets when they become frequent. In this paper, we introduce a compact data structure, the closed enumeration tree (CET), to maintain a dynamically selected set of itemsets over a sliding window. The selected itemsets contain a boundary between closed frequent itemsets and the rest of the itemsets. Concept drifts in a data stream are reflected by boundary movements in the CET. In other words, a status change of any itemset (e.g., from non-frequent to frequent) must occur through the boundary. Because the boundary is relatively stable, the cost of mining closed frequent itemsets over a sliding window is dramatically reduced to that of mining transactions that can possibly cause boundary movements in the CET. 
Our experiments show that our algorithm performs much better than representative algorithms for the state-of-the-art approaches.", "title": "" }, { "docid": "38302a1bfa5b187dc6590727fae14411", "text": "Recently, more attention has been directed for developing robotic systems that help elderly live independently or rely less on others. This paper describes a novel multi-function mobility assistive device for elderly. The proposed device aims to help patients who don't have enough physical strength on their lower limbs due to aging or diseases. Rather than one function device, the proposed device is designed to interactively assist in different lower limb activities; namely, sit to stand and walking activities as well as transfer paralyzed patients from bed to wheelchair and help them stand in upright position to improve blood circulation. The device is based on a non-conventional structure of 3-RPR planar parallel manipulator which offers besides the high rigidity of the parallel structure some interesting kinematic advantages. This structure provides kinematic decoupling between the position and orientation that required to position shoulder and orient trunk of the user. Also, it provides a suitable free of singularity workspace that required to perform the abovementioned activities. It also has only one dimensionless design parameter that simplifies the design. Additionally, it has high local kinematic and dynamic dexterity indices which achieve high accuracy and dynamic characteristics. Finally, the device is equipped with an active walker for walking activity. 
Experimental motion data of three healthy subjects that extracted using VICON human motion capturing system is used in a computer simulation to examine the performance of the device.", "title": "" }, { "docid": "77214b0522c0cb7772e094351b5bfa82", "text": "One of the key aspects in the implementation of reactive behaviour in the Web and, most importantly, in the semantic Web is the development of event detection engines. An event engine detects events occurring in a system and notifies their occurrences to its clients. Although primitive events are useful for modelling a good number of applications, certain other applications require the combination of primitive events in order to support reactive behaviour. This paper presents the implementation of an event detection engine that detects composite events specified by expressions of an illustrative sublanguage of the SNOOP event algebra", "title": "" }, { "docid": "09f812cae6c8952d27ef86168906ece8", "text": "Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study, instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.<<ETX>>", "title": "" } ]
scidocsrr
2c1506c5719c699dfb2d6720e7f6fae3
Multimodal emotion recognition from expressive faces, body gestures and speech
[ { "docid": "113cf957b47a8b8e3bbd031aa9a28ff2", "text": "We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use nonpropositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.", "title": "" }, { "docid": "dadcecd178721cf1ea2b6bf51bc9d246", "text": "8 Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect 9 of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the devel10 opment of appropriate databases. This paper addresses four main issues that need to be considered in developing 11 databases of emotional speech: scope, naturalness, context and descriptors. The state of the art is reviewed. A good deal 12 has been done to address the key issues, but there is still a long way to go. The paper shows how the challenge of 13 developing appropriate databases is being addressed in three major recent projects––the Reading–Leeds project, the 14 Belfast project and the CREST–ESP project. From these and other studies the paper draws together the tools and 15 methods that have been developed, addresses the problems that arise and indicates the future directions for the de16 velopment of emotional speech databases. 2002 Published by Elsevier Science B.V.", "title": "" } ]
[ { "docid": "26d8f073cfe1e907183022564e6bde80", "text": "With advances in computer hardware, 3D game worlds are becoming larger and more complex. Consequently the development of game worlds becomes increasingly time and resource intensive. This paper presents a framework for generation of entire virtual worlds using procedural generation. The approach is demonstrated with the example of a virtual city.", "title": "" }, { "docid": "04cf981a76c74b198ebe4703d0039e36", "text": "The acquisition of high-fidelity, long-term neural recordings in vivo is critically important to advance neuroscience and brain⁻machine interfaces. For decades, rigid materials such as metal microwires and micromachined silicon shanks were used as invasive electrophysiological interfaces to neurons, providing either single or multiple electrode recording sites. Extensive research has revealed that such rigid interfaces suffer from gradual recording quality degradation, in part stemming from tissue damage and the ensuing immune response arising from mechanical mismatch between the probe and brain. The development of \"soft\" neural probes constructed from polymer shanks has been enabled by advancements in microfabrication; this alternative has the potential to mitigate mismatch-related side effects and thus improve the quality of recordings. This review examines soft neural probe materials and their associated microfabrication techniques, the resulting soft neural probes, and their implementation including custom implantation and electrical packaging strategies. The use of soft materials necessitates careful consideration of surgical placement, often requiring the use of additional surgical shuttles or biodegradable coatings that impart temporary stiffness. Investigation of surgical implantation mechanics and histological evidence to support the use of soft probes will be presented. 
The review concludes with a critical discussion of the remaining technical challenges and future outlook.", "title": "" }, { "docid": "0ce46853852a20e5e0ab9aacd3ec20c1", "text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.", "title": "" }, { "docid": "51c4dd282e85db5741b65ae4386f6c48", "text": "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These lowlevel visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. 
The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.", "title": "" }, { "docid": "c2f338aef785f0d6fee503bf0501a558", "text": "Recognizing 3-D objects in cluttered scenes is a challenging task. Common approaches find potential feature correspondences between a scene and candidate models by matching sampled local shape descriptors and select a few correspondences with the highest descriptor similarity to identify models that appear in the scene. However, real scans contain various nuisances, such as noise, occlusion, and featureless object regions. This makes selected correspondences have a certain portion of false positives, requiring adopting the time-consuming model verification many times to ensure accurate recognition. This paper proposes a 3-D object recognition approach with three key components. First, we construct a Signature of Geometric Centroids descriptor that is descriptive and robust, and apply it to find high-quality potential feature correspondences. Second, we measure geometric compatibility between a pair of potential correspondences based on isometry and three angle-preserving components. Third, we perform effective correspondence selection by using both descriptor similarity and compatibility with an auxiliary set of “less” potential correspondences. Experiments on publicly available data sets demonstrate the robustness and/or efficiency of the descriptor, selection approach, and recognition framework. 
Comparisons with the state-of-the-arts validate the superiority of our recognition approach, especially under challenging scenarios.", "title": "" }, { "docid": "3e9f98a1aa56e626e47a93b7973f999a", "text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. OntoSOC modeling approach is based on Engeström‟s Human Activity Theory (HAT). That Theory allowed us to identify fundamental concepts and relationships between them. The top-down precess has been used to define differents sub-concepts. The modeled vocabulary permits us to organise data, to facilitate information retrieval by introducing a semantic layer in social web platform architecture, we project to implement. This platform can be considered as a « collective memory » and Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share an co-construct knowledge on permanent organized activities.", "title": "" }, { "docid": "77d0845463db0f4e61864b37ec1259b7", "text": "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric KullbackLeibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarially learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.", "title": "" }, { "docid": "d1f8ee3d6dbc7ddc76b84ad2b0bfdd16", "text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. 
Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplex (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.", "title": "" }, { "docid": "1d935fd69bcc3aca58f03e5d34892076", "text": "• Healthy behaviour interventions should be initiated in people newly diagnosed with type 2 diabetes. • In people with type 2 diabetes with A1C <1.5% above the person’s individualized target, antihyperglycemic pharmacotherapy should be added if glycemic targets are not achieved within 3 months of initiating healthy behaviour interventions. • In people with type 2 diabetes with A1C ≥1.5% above target, antihyperglycemic agents should be initiated concomitantly with healthy behaviour interventions, and consideration could be given to initiating combination therapy with 2 agents. • Insulin should be initiated immediately in individuals with metabolic decompensation and/or symptomatic hyperglycemia. • In the absence of metabolic decompensation, metformin should be the initial agent of choice in people with newly diagnosed type 2 diabetes, unless contraindicated. 
• Dose adjustments and/or additional agents should be instituted to achieve target A1C within 3 to 6 months. Choice of second-line antihyperglycemic agents should be made based on individual patient characteristics, patient preferences, any contraindications to the drug, glucose-lowering efficacy, risk of hypoglycemia, affordability/access, effect on body weight and other factors. • In people with clinical cardiovascular (CV) disease in whom A1C targets are not achieved with existing pharmacotherapy, an antihyperglycemic agent with demonstrated CV outcome benefit should be added to antihyperglycemic therapy to reduce CV risk. • In people without clinical CV disease in whom A1C target is not achieved with current therapy, if affordability and access are not barriers, people with type 2 diabetes and their providers who are concerned about hypoglycemia and weight gain may prefer an incretin agent (DPP-4 inhibitor or GLP-1 receptor agonist) and/or an SGLT2 inhibitor to other agents as they improve glycemic control with a low risk of hypoglycemia and weight gain. • In people receiving an antihyperglycemic regimen containing insulin, in whom glycemic targets are not achieved, the addition of a GLP-1 receptor agonist, DPP-4 inhibitor or SGLT2 inhibitor may be considered before adding or intensifying prandial insulin therapy to improve glycemic control with less weight gain and comparable or lower hypoglycemia risk.", "title": "" }, { "docid": "409f3b2768a8adf488eaa6486d1025a2", "text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. 
Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.", "title": "" }, { "docid": "fc2a7c789f742dfed24599997845b604", "text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.", "title": "" }, { "docid": "6006d2a032b60c93e525a8a28828cc7e", "text": "Recent advances in genome engineering indicate that innovative crops developed by targeted genome modification (TGM) using site-specific nucleases (SSNs) have the potential to avoid the regulatory issues raised by genetically modified organisms. 
These powerful SSNs tools, comprising zinc-finger nucleases, transcription activator-like effector nucleases, and clustered regulatory interspaced short palindromic repeats/CRISPR-associated systems, enable precise genome engineering by introducing DNA double-strand breaks that subsequently trigger DNA repair pathways involving either non-homologous end-joining or homologous recombination. Here, we review developments in genome-editing tools, summarize their applications in crop organisms, and discuss future prospects. We also highlight the ability of these tools to create non-transgenic TGM plants for next-generation crop breeding.", "title": "" }, { "docid": "98269ed4d72abecb6112c35e831fc727", "text": "The goal of this article is to place the role that social media plays in collective action within a more general theoretical structure, using the events of the Arab Spring as a case study. The article presents two broad theoretical principles. The first is that one cannot understand the role of social media in collective action without first taking into account the political environment in which they operate. The second principle states that a significant increase in the use of the new media is much more likely to follow a significant amount of protest activity than to precede it. The study examines these two principles using political, media, and protest data from twenty Arab countries and the Palestinian Authority. The findings provide strong support for the validity of the claims.", "title": "" }, { "docid": "2348652010d1dec37a563e3eed15c090", "text": "This study firstly examines the current literature concerning ERP implementation problems during implementation phases and causes of ERP implementation failure. A multiple case study research methodology was adopted to understand “why” and “how” these ERP systems could not be implemented successfully. 
Different stakeholders (including top management, project manager, project team members and ERP consultants) from these case studies were interviewed, and ERP implementation documents were reviewed for triangulation. An ERP life cycle framework was applied to study the ERP implementation process and the associated problems in each phase of ERP implementation. Fourteen critical failure factors were identified and analyzed, and three common critical failure factors (poor consultant effectiveness, project management effectiveness and poor quality of business process re-engineering) were examined and discussed. Future research on ERP implementation and critical failure factors is discussed. It is hoped that this research will help to bridge the current literature gap and provide practical advice for both academics and practitioners.", "title": "" }, { "docid": "1ef814163a5c91155a2d7e1b4b19f4d7", "text": "In this article, a frequency reconfigurable fractal patch antenna using pin diodes is proposed and studied. The antenna structure has been designed on FR-4 low-cost substrate material of relative permittivity εr = 4.4, with a compact volume of 30×30×0.8 mm3. The bandwidth and resonance frequency of the antenna design will be increased when we exploit the fractal iteration on the patch antenna. This antenna covers some service bands such as: WiMAX, m-WiMAX, WLAN, C-band and X band applications. The simulation of the proposed antenna is carried out using CST microwave studio. The radiation pattern and S parameter are further presented and discussed.", "title": "" }, { "docid": "2c79e4e8563b3724014a645340b869ce", "text": "Development of linguistic technologies and penetration of social media provide powerful possibilities to investigate users' moods and psychological states of people. In this paper we discussed possibility to improve accuracy of stock market indicators predictions by using data about psychological states of Twitter users. 
For analysis of psychological states we used lexicon-based approach, which allow us to evaluate presence of eight basic emotions in more than 755 million tweets. The application of Support Vectors Machine and Neural Networks algorithms to predict DJIA and S&P500 indicators are discussed.", "title": "" }, { "docid": "fabcb243bff004279cfb5d522a7bed4b", "text": "Vein pattern is the network of blood vessels beneath person’s skin. Vein patterns are sufficiently different across individuals, and they are stable unaffected by ageing and no significant changed in adults by observing. It is believed that the patterns of blood vein are unique to every individual, even among twins. Finger vein authentication technology has several important features that set it apart from other forms of biometrics as a highly secure and convenient means of personal authentication. This paper presents a finger-vein image matching method based on minutiae extraction and curve analysis. This proposed system is implemented in MATLAB. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.", "title": "" }, { "docid": "6deab7156f09594f497806d6f6ad2a27", "text": "The development of the Multidimensional Health Locus of Control scales is described. Scales have been developed to tap beliefs that the source of reinforcements for health-related behaviors is primarily internal, a matter of chance, or under the control of powerful others. These scales are based on earlier work with a general Health Locus of Control Scale, which, in turn, was developed from Rotter's social learning theory. Equivalent forms of the scales are presented along with initial internal consistency and validity data. 
Possible means of utilizing these scales are provided.", "title": "" }, { "docid": "027e10898845955beb5c81518f243555", "text": "As the field of Natural Language Processing has developed, research has progressed on ambitious semantic tasks like Recognizing Textual Entailment (RTE). Systems that approach these tasks may perform sophisticated inference between sentences, but often depend heavily on lexical resources like WordNet to provide critical information about relationships and entailments between lexical items. However, lexical resources are expensive to create and maintain, and are never fully comprehensive. Distributional Semantics has long provided a method to automatically induce meaning representations for lexical items from large corpora with little or no annotation efforts. The resulting representations are excellent as proxies of semantic similarity: words will have similar representations if their semantic meanings are similar. Yet, knowing two words are similar does not tell us their relationship or whether one entails the other. We present several models for identifying specific relationships and entailments from distributional representations of lexical semantics. Broadly, this work falls into two distinct but related areas: the first predicts specific ontology relations and entailment decisions between lexical items devoid of context; and the second predicts specific lexical paraphrases in complete sentences. We provide insight and analysis of how and why our models are able to generalize to novel lexical items and improve upon prior work. We propose several shortand long-term extensions to our work. In the short term, we propose applying one of our hypernymy-detection models to other relationships and evaluating our more recent work in an end-to-end RTE system. 
In the long-term, we propose adding consistency constraints to our lexical relationship prediction, better integration of context into our lexical paraphrase model, and new distributional models for improving word representations.", "title": "" }, { "docid": "bffbc725b52468b41c53b156f6eadedb", "text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.", "title": "" } ]
scidocsrr
c23304081a262f1fff80fadacd664000
Provably secure session key distribution: the three party case
[ { "docid": "5a28fbdcce61256fd67d97fc353b138b", "text": "Use of encryption to achieve authenticated communication in computer networks is discussed. Example protocols are presented for the establishment of authenticated connections, for the management of authenticated mail, and for signature verification and document integrity guarantee. Both conventional and public-key encryption algorithms are considered as the basis for protocols.", "title": "" } ]
[ { "docid": "b5009853d22801517431f46683b235c2", "text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.", "title": "" }, { "docid": "47501c171c7b3f8e607550c958852be1", "text": "Fundus images provide an opportunity for early detection of diabetes. Generally, retina fundus images of diabetic patients exhibit exudates, which are lesions indicative of Diabetic Retinopathy (DR). Therefore, computational tools can be considered to be used in assisting ophthalmologists and medical doctor for the early screening of the disease. Hence in this paper, we proposed visualisation of exudates in fundus images using radar chart and Color Auto Correlogram (CAC) technique. The proposed technique requires that the Optic Disc (OD) from the fundus image be removed. Next, image normalisation was performed to standardise the colors in the fundus images. The exudates from the modified image are then extracted using Artificial Neural Network (ANN) and visualised using radar chart and CAC technique. The proposed technique was tested on 149 images of the publicly available MESSIDOR database. 
Experimental results suggest that the method has potential to be used for early indication of DR, by visualising the overlap between CAC features of the fundus images.", "title": "" }, { "docid": "8e06dbf42df12a34952cdd365b7f328b", "text": "Data and theory from prism adaptation are reviewed for the purpose of identifying control methods in applications of the procedure. Prism exposure evokes three kinds of adaptive or compensatory processes: postural adjustments (visual capture and muscle potentiation), strategic control (including recalibration of target position), and spatial realignment of various sensory-motor reference frames. Muscle potentiation, recalibration, and realignment can all produce prism exposure aftereffects and can all contribute to adaptive performance during prism exposure. Control over these adaptive responses can be achieved by manipulating the locus of asymmetric exercise during exposure (muscle potentiation), the similarity between exposure and post-exposure tasks (calibration), and the timing of visual feedback availability during exposure (realignment).", "title": "" }, { "docid": "15dbf1ad05c8219be484c01145c09b6c", "text": "In this paper, we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O ( √ Td ln(KT ln(T )/δ) ) regret bound that holds with probability 1− δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. 
We also prove a lower bound of Ω(√Td) for this setting, matching the upper bound up to logarithmic factors.", "title": "" }, { "docid": "6cce055b947b1d222bfdee01507416a1", "text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on board a vehicle, and then identifies the road signs, assisting the driver of the vehicle to properly operate the vehicle. This paper presents an automatic road sign recognition system capable of analysing live images, detecting multiple road signs within images, and classifying the type of the detected road signs. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space and locates road signs. The classification module determines the type of detected road signs using a series of one-to-one architectural Multi Layer Perceptron neural networks. The performances of the classifiers that are trained using Resilient Backpropagation and Scaled Conjugate Gradient algorithms are compared. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 96% using Scaled Conjugate Gradient trained classifiers.", "title": "" }, { "docid": "5f52b31afe9bf18f009a10343ccedaf0", "text": "The preservation of image quality under various display conditions becomes more and more important in the multimedia era. A considerable amount of effort has been devoted to compensating the quality degradation caused by dim LCD backlight for mobile devices and desktop monitors. However, most previous enhancement methods for backlight-scaled images only consider the luminance component and overlook the impact of color appearance on image quality. In this paper, we propose a fast and elegant method that exploits the anchoring property of human visual system to preserve the color appearance of backlight-scaled images as much as possible.
Our approach is distinguished from previous ones in many aspects. First, it has a sound theoretical basis. Second, it takes the luminance and chrominance components into account in an integral manner. Third, it has low complexity and can process 720p high-definition videos at 35 frames per second without flicker. The superior performance of the proposed method is verified through psychophysical tests.", "title": "" }, { "docid": "29378712a9ab9031879c95ee8baad923", "text": "In recent decades, different extensional forms of fuzzy sets have been developed. However, these multitudinous fuzzy sets are still unable to deal well with quantitative information. Motivated by the fuzzy linguistic approach and hesitant fuzzy sets, the hesitant fuzzy linguistic term set was introduced, and it is a more reasonable set for dealing with quantitative information. During the process of multiple criteria decision making, it is necessary to propose some aggregation operators to handle hesitant fuzzy linguistic information. In this paper, two aggregation operators for hesitant fuzzy linguistic term sets are introduced: the hesitant fuzzy linguistic Bonferroni mean operator and the weighted hesitant fuzzy linguistic Bonferroni mean operator. Correspondingly, several properties of these two aggregation operators are discussed. Finally, a practical case is shown in order to illustrate the application of these two aggregation operators. This case mainly discusses how to choose the best hospital for conducting the whole-society resource management research included in a wisdom medical health system.
", "title": "" }, { "docid": "6e70435f2d434581f00962b5677facfa", "text": "Many institutions of Higher Education and Corporate Training Institutes are resorting to e-Learning as a means of solving authentic learning and performance problems, while other institutions are hopping onto the bandwagon simply because they do not want to be left behind. Success is crucial because an unsuccessful effort to implement e-Learning will be clearly reflected in terms of the return on investment. One of the most crucial prerequisites for successful implementation of e-Learning is the need for careful consideration of the underlying pedagogy, or how learning takes place online. In practice, however, this is often the most neglected aspect in any effort to implement e-Learning. The purpose of this paper is to identify the pedagogical principles underlying the teaching and learning activities that constitute effective e-Learning. An analysis and synthesis of the principles and ideas by the practicing e-Learning company employing the author will also be presented, in the perspective of deploying an effective Learning Management System (LMS).", "title": "" }, { "docid": "5ec1cff52a55c5bd873b5d0d25e0456b", "text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet.
The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and small-set in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.", "title": "" }, { "docid": "131415093146eeecb6231e22e514170b", "text": "Aspect-Oriented Programming (AOP) provides another way of thinking about program structure that allows developers to separate and modularize concerns like crosscutting concerns. These concerns are maintained in aspects, which allow both the core and crosscutting concerns to be maintained easily. Much research in this area has focused on traditional software development, although little has been done in the Web development context. This paper presents an overview of existing AOP PHP development tools, identifying their strengths and weaknesses. We then compare the existing AOP PHP development tools presented in this paper and discuss how these tools can be effectively used in Web development. Finally, we discuss how AOP can enhance Web development and present some possibilities for future work in this area.
The localization result is updated each time a keyframe is added, and observations of global anchor points are added to the client-side bundle adjustment process to further refine the SLAM map registration and limit drift. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.", "title": "" }, { "docid": "0b44782174d1dae460b86810db8301ec", "text": "We present an overview of Markov chain Monte Carlo, a sampling method for model inference and uncertainty quantification. We focus on the Bayesian approach to MCMC, which allows us to estimate the posterior distribution of model parameters, without needing to know the normalising constant in Bayes’ theorem. Given an estimate of the posterior, we can then determine representative models (such as the expected model, and the maximum posterior probability model), the probability distributions for individual parameters, and the uncertainty about the predictions from these models. We also consider variable dimensional problems in which the number of model parameters is unknown and needs to be inferred. Such problems can be addressed with reversible jump (RJ) MCMC. This leads us to model choice, where we may want to discriminate between models or theories of differing complexity. For problems where the models are hierarchical (e.g. similar structure but with a different number of parameters), the Bayesian approach naturally selects the simpler models. More complex problems require an estimate of the normalising constant in Bayes’ theorem (also known as the evidence) and this is difficult to do reliably for high dimensional problems. 
We illustrate the applications of RJMCMC with 3 examples from our earlier work involving modelling distributions of geochronological age data, inference of sea-level and sediment supply histories from 2D stratigraphic cross-sections, and identification of spatially discontinuous thermal histories from a suite of apatite fission track samples distributed in 3D.", "title": "" }, { "docid": "ec300259d5bcdcf3373d05ddcd8f99ae", "text": "This research focuses on the flapping wing mechanism design for the micro air vehicle model. The paper starts with an analysis of the topological structure characteristics of the Single-Crank Double-Rocker mechanism. Following the design procedure, all possible combinations of flapping mechanisms containing not more than 6 components were generated. The design procedure is based on Hong-Sen Yan's creative design theory for mechanical devices. This research designed 31 different types of mechanisms, which provide more directions for the design and fabrication of the micro air vehicle model.
Participants will be recruited from a primary care clinic in a teaching hospital that primarily serves low-income populations. An intervention group of 150 participants will receive health coaching, home blood pressure monitoring, and home-titration of antihypertensive medications during 6 months. The control group (n=150) will receive health coaching plus home blood pressure monitoring for the same duration. A passive control group will receive usual care. Blood pressure measurements will take place at baseline, and after 6 and 12 months. The primary outcome will be change in systolic blood pressure after 6 and 12 months. Secondary outcomes measured will be change in diastolic blood pressure, adverse events, and patient and provider satisfaction.\n\n\nDISCUSSION\nThe present study is designed to assess whether the 3-pronged approach of health coaching, home blood pressure monitoring, and home medication titration can successfully improve blood pressure, and if so, whether this effect persists beyond the period of the intervention.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT01013857.", "title": "" }, { "docid": "f6f1462e8edd8200948168423c87c1bf", "text": "Users' behaviors are driven by their preferences across various aspects of items they are potentially interested in purchasing, viewing, etc. Latent space approaches model these aspects in the form of latent factors. Although such approaches have been shown to lead to good results, the aspects that are important to different users can vary. In many domains, there may be a set of aspects that all users care about and a set of aspects that are specific to different subsets of users.
To explicitly capture this, we consider models in which there are some latent factors that capture the shared aspects and some user subset specific latent factors that capture the set of aspects that the different subsets of users care about.\n In particular, we propose two latent space models: rGLSVD and sGLSVD, that combine such a global and user subset specific sets of latent factors. The rGLSVD model assigns the users into different subsets based on their rating patterns and then estimates a global and a set of user subset specific local models whose number of latent dimensions can vary.\n The sGLSVD model estimates both global and user subset specific local models by keeping the number of latent dimensions the same among these models but optimizes the grouping of the users in order to achieve the best approximation. Our experiments on various real-world datasets show that the proposed approaches significantly outperform state-of-the-art latent space top-N recommendation approaches.", "title": "" }, { "docid": "0860b29f52d403a0ff728a3e356ec071", "text": "Neuroanatomy has entered a new era, culminating in the search for the connectome, otherwise known as the brain's wiring diagram. While this approach has led to landmark discoveries in neuroscience, potential neurosurgical applications and collaborations have been lagging. In this article, the authors describe the ideas and concepts behind the connectome and its analysis with graph theory. Following this they then describe how to form a connectome using resting state functional MRI data as an example. Next they highlight selected insights into healthy brain function that have been derived from connectome analysis and illustrate how studies into normal development, cognitive function, and the effects of synthetic lesioning can be relevant to neurosurgery. 
Finally, they provide a précis of early applications of the connectome and related techniques to traumatic brain injury, functional neurosurgery, and neurooncology.", "title": "" }, { "docid": "c94d01ee0aaa8a70ce4e3441850316a6", "text": "Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures. It causes that CNNs are allowed to manage data with Euclidean or grid-like structures (e.g., images), not ones with non-Euclidean or graph structures (e.g., traffic networks). To broaden the reach of CNNs, we develop structure-aware convolution to eliminate the invariance, yielding a unified mechanism of dealing with both Euclidean and non-Euclidean structured data. Technically, filters in the structure-aware convolution are generalized to univariate functions, which are capable of aggregating local inputs with diverse topological structures. Since infinite parameters are required to determine a univariate function, we parameterize these filters with numbered learnable parameters in the context of the function approximation theory. By replacing the classical convolution in CNNs with the structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established. Extensive experiments on eleven datasets strongly evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering, text categorization, skeleton-based action recognition, molecular activity detection, and taxi flow prediction.", "title": "" }, { "docid": "94bb7d2329cbea921c6f879090ec872d", "text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. 
By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io", "title": "" }, { "docid": "a29a61f5ad2e4b44e8e3d11b471a0f06", "text": "To ascertain by MRI the presence of filler injected into facial soft tissue and characterize complications by contrast enhancement. Nineteen volunteers without complications were initially investigated to study the MRI features of facial fillers. We then studied another 26 patients with clinically diagnosed filler-related complications using contrast-enhanced MRI. TSE-T1-weighted, TSE-T2-weighted, fat-saturated TSE-T2-weighted, and TIRM axial and coronal scans were performed in all patients, and contrast-enhanced fat-suppressed TSE-T1-weighted scans were performed in complicated patients, who were then treated with antibiotics. Patients with soft-tissue enhancement and those without enhancement but who did not respond to therapy underwent skin biopsy. Fisher’s exact test was used for statistical analysis. MRI identified and quantified the extent of fillers. Contrast enhancement was detected in 9/26 patients, and skin biopsy consistently showed inflammatory granulomatous reaction, whereas in 5/17 patients without contrast enhancement, biopsy showed no granulomas. Fisher’s exact test showed significant correlation (p < 0.001) between subcutaneous contrast enhancement and granulomatous reaction. Cervical lymph node enlargement (longitudinal axis >10 mm) was found in 16 complicated patients (65 %; levels IA/IB/IIA/IIB). 
MRI is a useful non-invasive tool for anatomical localization of facial dermal filler; IV gadolinium administration is advised in complicated cases for characterization of granulomatous reaction. • MRI is a non-invasive tool for facial dermal filler detection and localization. • MRI criteria to evaluate complicated/non-complicated cases after facial dermal filler injections are defined. • Contrast-enhanced MRI detects subcutaneous inflammatory granulomatous reaction due to dermal filler. • 65 % of patients with filler-related complications showed lymph-node enlargement versus 31.5 % without complications. • Lymph node enlargement involved cervical levels (IA/IB/IIA/IIB) that drained treated facial areas.", "title": "" }, { "docid": "2c4a2d41653f05060ff69f1c9ad7e1a6", "text": "Until recently, information technology (IT) centricity was the prevailing paradigm in cyber security, organized around the confidentiality, integrity and availability of IT assets. Despite its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and the introduction of highly mobile tactical missions, where IT-centric cyber security was not able to take into account the dynamics of the time- and space-bound behavior of missions and changes in their operational context. In this paper we will show that the move from IT-centricity towards the notion of cyber attack resilient missions opens new opportunities for achieving the completion of mission goals even if the IT assets and services that support the missions are under cyber attack. The paper discusses several fundamental architectural principles for achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks.
In order to achieve the overall system resilience and survivability under a cyber attack, both the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in a fairly detailed manner.", "title": "" } ]
scidocsrr
952730d7e4071e6f3fba2fc1a322a745
RUPERT: An exoskeleton robot for assisting rehabilitation of arm functions
[ { "docid": "cdc3e4b096be6775547a8902af52e798", "text": "OBJECTIVE\nThe aim of the study was to present a systematic review of studies that investigate the effects of robot-assisted therapy on motor and functional recovery in patients with stroke.\n\n\nMETHODS\nA database of articles published up to October 2006 was compiled using the following Medline key words: cerebral vascular accident, cerebral vascular disorders, stroke, paresis, hemiplegia, upper extremity, arm, and robot. References listed in relevant publications were also screened. Studies that satisfied the following selection criteria were included: (1) patients were diagnosed with cerebral vascular accident; (2) effects of robot-assisted therapy for the upper limb were investigated; (3) the outcome was measured in terms of motor and/or functional recovery of the upper paretic limb; and (4) the study was a randomized clinical trial (RCT). For each outcome measure, the estimated effect size (ES) and the summary effect size (SES) expressed in standard deviation units (SDU) were calculated for motor recovery and functional ability (activities of daily living [ADLs]) using fixed and random effect models. Ten studies, involving 218 patients, were included in the synthesis. Their methodological quality ranged from 4 to 8 on a (maximum) 10-point scale.\n\n\nRESULTS\nMeta-analysis showed a nonsignificant heterogeneous SES in terms of upper limb motor recovery. Sensitivity analysis of studies involving only shoulder-elbow robotics subsequently demonstrated a significant homogeneous SES for motor recovery of the upper paretic limb. No significant SES was observed for functional ability (ADL).\n\n\nCONCLUSION\nAs a result of marked heterogeneity in studies between distal and proximal arm robotics, no overall significant effect in favor of robot-assisted therapy was found in the present meta-analysis. 
However, subsequent sensitivity analysis showed a significant improvement in upper limb motor function after stroke for upper arm robotics. No significant improvement was found in ADL function. However, the administered ADL scales in the reviewed studies fail to adequately reflect recovery of the paretic upper limb, whereas valid instruments that measure outcome of dexterity of the paretic arm and hand are mostly absent in selected studies. Future research into the effects of robot-assisted therapy should therefore distinguish between upper and lower robotics arm training and concentrate on kinematical analysis to differentiate between genuine upper limb motor recovery and functional recovery due to compensation strategies by proximal control of the trunk and upper limb.", "title": "" } ]
[ { "docid": "904454a191da497071ee9b835561c6e6", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stop waves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain-structured RNNs, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need for any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "1af5c5e20c1ce827f899dc70d0495bdc", "text": "High power sources and high sensitivity detectors are highly in demand for terahertz imaging and sensing systems. The use of nano-antennas and nano-plasmonic light concentrators in photoconductive terahertz sources and detectors has proven to offer significantly higher terahertz radiation powers and detection sensitivities by enhancing photoconductor quantum efficiency while maintaining its ultrafast operation.
This is because of the unique capability of nano-antennas and nano-plasmonic structures in manipulating the concentration of photo-generated carriers within the device active area, allowing a larger number of photocarriers to efficiently contribute to terahertz radiation and detection. An overview of some of the recent advancements in terahertz optoelectronic devices through use of various types of nano-antennas and nano-plasmonic light concentrators is presented in this article.", "title": "" }, { "docid": "6f95d8bcaefcc99209279dadb1beb0a6", "text": "Public cloud software marketplaces already offer users a wealth of choice in operating systems, database management systems, financial software, and virtual networking, all deployable and configurable at the click of a button. Unfortunately, this level of customization has not extended to emerging hypervisor-level services, partly because traditional virtual machines (VMs) are fully controlled by only one hypervisor at a time. Currently, a VM in a cloud platform cannot concurrently use hypervisorlevel services from multiple third-parties in a compartmentalized manner. We propose the notion of a multihypervisor VM, which is an unmodified guest that can simultaneously use services from multiple coresident, but isolated, hypervisors. We present a new virtualization architecture, called Span virtualization, that leverages nesting to allow multiple hypervisors to concurrently control a guest’s memory, virtual CPU, and I/O resources. Our prototype of Span virtualization on the KVM/QEMU platform enables a guest to use services such as introspection, network monitoring, guest mirroring, and hypervisor refresh, with performance comparable to traditional nested VMs.", "title": "" }, { "docid": "3c733b60b2319c706069d9163cf849d4", "text": "A novel dual-mode microstrip square loop resonator is proposed using the slow-wave and dispersion features of the microstrip slow-wave open-loop resonator. 
It is shown that the designed and fabricated dual-mode microstrip filter has a wide stopband including the first spurious resonance frequency. Also, it has a size reduction of about 50% at the same center frequency, as compared with the dual-mode bandpass filters such as microstrip patch, cross-slotted patch, square loop, and ring resonator filter.", "title": "" }, { "docid": "4ee84cfdef31d4814837ad2811e59cd4", "text": "In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. 
Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.", "title": "" }, { "docid": "7e3de04fc54b78d66e8209984a76b25c", "text": "OBJECTIVE\nTo assess existing reported human trials of Withania somnifera (WS; common name, ashwagandha) for the treatment of anxiety.\n\n\nDESIGN\nSystematic review of the literature, with searches conducted in PubMed, SCOPUS, CINAHL, and Google Scholar by a medical librarian. Additionally, the reference lists of studies identified in these databases were searched by a research assistant, and queries were conducted in the AYUSH Research Portal. Search terms included \"ashwagandha,\" \"Withania somnifera,\" and terms related to anxiety and stress. Inclusion criteria were human randomized controlled trials with a treatment arm that included WS as a remedy for anxiety or stress. The study team members applied inclusion criteria while screening the records by abstract review.\n\n\nINTERVENTION\nTreatment with any regimen of WS.\n\n\nOUTCOME MEASURES\nNumber and results of studies identified in the review.\n\n\nRESULTS\nSixty-two abstracts were screened; five human trials met inclusion criteria. Three studies compared several dosage levels of WS extract with placebos using versions of the Hamilton Anxiety Scale, with two demonstrating significant benefit of WS versus placebo, and the third demonstrating beneficial effects that approached but did not achieve significance (p=0.05). A fourth study compared naturopathic care with WS versus psychotherapy by using Beck Anxiety Inventory (BAI) scores as an outcome; BAI scores decreased by 56.5% in the WS group and decreased 30.5% for psychotherapy (p<0.0001). A fifth study measured changes in Perceived Stress Scale (PSS) scores in WS group versus placebo; there was a 44.0% reduction in PSS scores in the WS group and a 5.5% reduction in the placebo group (p<0.0001). 
All studies exhibited unclear or high risk of bias, and heterogeneous design and reporting prevented the possibility of meta-analysis.\n\n\nCONCLUSIONS\nAll five studies concluded that WS intervention resulted in greater score improvements (significantly in most cases) than placebo in outcomes on anxiety or stress scales. Current evidence should be received with caution because of an assortment of study methods and cases of potential bias.", "title": "" }, { "docid": "c52a9d3d66d2b56374f26580a728cbd2", "text": "Automatic License Plate Recognition (ALPR) has important applications in traffic surveillance. It is a challenging problem, especially in countries like India where the license plates have varying sizes, numbers of lines, fonts, etc. The difficulty is all the more accentuated in traffic videos as the cameras are placed high and most plates appear skewed. This work aims to address ALPR using Deep CNN methods for real-time traffic videos. We first extract license plate candidates from each frame using edge information and geometrical properties, ensuring high recall. These proposals are fed to a CNN classifier for License Plate detection obtaining high precision. We then use a CNN classifier trained for individual characters along with a spatial transformer network (STN) for character recognition. Our system is evaluated on several traffic videos with vehicles having different license plate formats in terms of tilt, distances, colors, illumination, character size, thickness, etc. Results demonstrate robustness to such variations and impressive performance in both localization and recognition. We also make the dataset available for further research on this topic.", "title": "" }, { "docid": "c4f6edd01cee1e44a00eca11a086a284", "text": "In this paper we investigate the effectiveness of Recurrent Neural Networks (RNNs) in a top-N content-based recommendation scenario.
Specifically, we propose a deep architecture which adopts Long Short Term Memory (LSTM) networks to jointly learn two embeddings representing the items to be recommended as well as the preferences of the user. Next, given such a representation, a logistic regression layer calculates the relevance score of each item for a specific user, and the top-N items are returned as recommendations.\n In the experimental session we evaluated the effectiveness of our approach against several baselines: first, we compared it to other shallow models based on neural networks (such as Word2Vec and Doc2Vec); next, we evaluated it against state-of-the-art algorithms for collaborative filtering. In both cases, our methodology obtains a significant improvement over all the baselines, thus giving evidence of the effectiveness of deep learning techniques in content-based recommendation scenarios and paving the way for several future research directions.", "title": "" }, { "docid": "f065684c26f71567c092ee6c85d5e831", "text": "Various types of killings occur within family matrices. The news media highlight the dramatic components, and even novels now use it as a theme. 1 However, a psychiatric understanding remains elusive. Not all killings within a family are familicidal. For want of a better term, I have called the killing of more than one member of a family by another family member \"familicide.\" The destruction of the family unit appears to be the goal. Such behavior comes within the category of \"mass murders\" where a number of victims are killed in a short period of time by one person. However, in mass murders the victims are not exclusively family members. The case of one person committing a series of homicides over an extended period of time, such as months or years, also differs from familicide. The latter can result in the perpetrator getting killed or injured in the process, or subsequently attempting a suicidal act.
However, neither injury, nor suicide, nor death of the perpetrator is an indispensable part of familicide. Fifteen different theories purport to explain physical violence within the nuclear family. 2 Varieties of killings within a family are subvarieties and familicide is yet a rarer event. Pedicide is the killing of a child by a parent. These are usually cases of one child being killed by one parent. If the child happens to be an infant, the act is infanticide. Many of the latter are situations where a mother kills her infant and is diagnosed schizophrenic or psychotic depressive. Child beating by a parent can result in inadvertent death. One sibling killing another is fratricide. A child killing a parent is parricide, or more specifically patricide or matricide. Uxoricide is one spouse killing another. Each of these behaviors has its own intrapsychic and interpersonal correlates. Such correlates often involve victimologic aspects. As a caveat, and based on this study, we should not assume that the perpetrators in familicide all bear one diagnosis even in a descriptive nosological sense. A distinction is needed between intra familial homicides related to psychiatric disturbance in one family member and collective types of violence in which families are destroyed. Extermination of families based on national, ethnic, racial or religious backgrounds are not", "title": "" }, { "docid": "8e44d0e60c6460a07d66ba9a90741b86", "text": "Although graph embedding has been a powerful tool for modeling data intrinsic structures, simply employing all features for data structure discovery may result in noise amplification. This is particularly severe for high dimensional data with small samples. To meet this challenge, this paper proposes a novel efficient framework to perform feature selection for graph embedding, in which a category of graph embedding methods is cast as a least squares regression problem. 
In this framework, a binary feature selector is introduced to naturally handle the feature cardinality in the least squares formulation. The resultant integer programming problem is then relaxed into a convex Quadratically Constrained Quadratic Program (QCQP) learning problem, which can be efficiently solved via a sequence of accelerated proximal gradient (APG) methods. Since each APG optimization is w.r.t. only a subset of features, the proposed method is fast and memory efficient. The proposed framework is applied to several graph embedding learning problems, including supervised, unsupervised, and semi-supervised graph embedding. Experimental results on several high-dimensional datasets demonstrated that the proposed method outperformed the considered state-of-the-art methods.", "title": "" }, { "docid": "85576e6b36757f0a475e7482e4827a91", "text": "Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they suffer from low parallelizability and are thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation — the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property globally but relaxes it locally, and is thus able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT’14 English-German translation, the SAT achieves a 5.58× speedup while maintaining 88% of the translation quality, significantly better than previous non-autoregressive methods. When producing two words at each time step, the SAT is almost lossless (only 1% degradation in BLEU score).", "title": "" }, { "docid": "3b0a36f6d484705f8a68ae4a928b743e", "text": "Solution The unique pure strategy subgame perfect equilibrium is (Rr, r). 2. (30pts.) 
An entrepreneur has a project that she presents to a capitalist. She has her own money that she could invest in the project and is looking for additional funding from the capitalist. The project is either good (denoted g) (with probability p) or it is bad (denoted b) (with probability 1− p) and only the entrepreneur knows the quality of the project. The entrepreneur (E) decides whether to invest her own money (I) or not (N), the capitalist (C) observes whether the entrepreneur has invested or not and then decides whether to invest his money (i) or not (n). Figure 1 represents the game and gives the payoffs, where the first number is the entrepreneur’s payoff and the second number is the capitalist’s. (a) (20pts.) Find the set of pure strategy perfect Bayesian equilibria of this game.", "title": "" }, { "docid": "e787a1486a6563c15a74a07ed9516447", "text": "This chapter describes how engineering principles can be used to estimate joint forces. Principles of static and dynamic analysis are reviewed, with examples of static analysis applied to the hip and elbow joints and to the analysis of joint forces in human ancestors. Applications to indeterminant problems of joint mechanics are presented and utilized to analyze equine fetlock joints.", "title": "" }, { "docid": "6bca70ccf17fd4380502b7b4e2e7e550", "text": "A consistent UI leaves an overall impression on user’s psychology, aesthetics and taste. Human–computer interaction (HCI) is the study of how humans interact with computer systems. Many disciplines contribute to HCI, including computer science, psychology, ergonomics, engineering, and graphic design. HCI is a broad term that covers all aspects of the way in which people interact with computers. In their daily lives, people are coming into contact with an increasing number of computer-based technologies. Some of these computer systems, such as personal computers, we use directly. 
We come into contact with other systems less directly — for example, we have all seen cashiers use laser scanners and digital cash registers when we shop. We have extended this same line of work, given it a more solid justification by linking it with other scientific pillars, and concluded some of the best holistic groundwork for future innovations. It is done by inspecting various theories of Colour, Shape, Wave, Fonts, Design language and other miscellaneous theories in detail. Keywords— Karamvir Singh Rajpal, Mandeep Singh Rajpal, User Interface, User Experience, Design, Frontend, Neonex Technology,", "title": "" }, { "docid": "5bf761b94840bcab163ae3a321063b8b", "text": "The simulation method plays an important role in the investigation of intrabody communication (IBC). Due to the problems of the transfer function and the corresponding parameters, only the simulation of the galvanic coupling IBC along the arm has been achieved at present. In this paper, a method for the mathematical simulation of the galvanic coupling IBC with different signal transmission paths is introduced. First, a new transfer function of the galvanic coupling IBC was derived with consideration of the internal resistances of the IBC devices. Second, the determination of the corresponding parameters used in the transfer function was discussed in detail. Finally, both the measurements and the simulations of the galvanic coupling IBC along the different signal transmission paths were carried out.
Our investigation shows that the mathematical simulation results coincide with the measurement results over the frequency range from 100 kHz to 5 MHz, which indicates that the proposed method offers significant advantages in the theoretical analysis and the application of the galvanic coupling IBC.", "title": "" }, { "docid": "0baf2c97da07f954a76b81f840ccca9e", "text": "Chapter 1 Introduction 1.1 Background: Identification is the action of recognizing or being recognized, in particular, identification of a thing or person from previous exposures or information. Identification these days is quite necessary for security purposes. It can be done using biometric parameters such as fingerprints, ID scans, face recognition, etc. Probably the first well-known example of a facial recognition system is due to Kohonen, who demonstrated that a simple neural network could perform face recognition for aligned and normalized face images. The network he employed computed a face description by estimating the eigenvectors of the face image's autocorrelation matrix; these eigenvectors are now called 'eigenfaces'. But Kohonen's approach was not a practical success, due to the need for accurate alignment and normalization. In subsequent years many researchers attempted facial recognition systems based on edges, inter-feature distances, and various neural network techniques. While many succeeded on small-scale databases of aligned images, none adequately addressed the more practical problem of large databases where the position and scale of the face were unknown. An image may be considered a function of two real variables, defined in the \"real world\", for example, a(x, y), where 'a' is the amplitude in terms of brightness of the image at the real coordinate position (x, y). 
Thanks to modern technology, it is now practicable to process multi-dimensional signals with systems that range from simple digital circuits to complex ones. Image Analysis (input image -> computation out); Image Understanding (input image -> high-level interpretation out). In this age of science and technology, images also find wider application due to the rapidly increasing importance of scientific visualization, for example microarray data in genetic research. To process an image, it is first transformed into digital form. Digitization comprises sampling of the image and quantization of the sampled values. Once in digital form, processing is performed. This may draw focal attention to the image, improve image features such as boundaries, or introduce variations that make a graphic display more effective for representation and study. This technique does not enlarge the intrinsic information content of the data. It is also used to remove unwanted components of the observed image and reduce the effect of degradations. The scope and precision of our knowledge of the degradation process and of filter design are the basis of …", "title": "" }, { "docid": "2399755bed6b1fc5fac495d54886acc0", "text": "Lately, fire outbreaks have become a common issue in Malaysia, and the damage caused by this type of incident is tremendous to nature and human interests. Because of this, the need for fire detection applications has increased in recent years. In this paper we propose a fire detection algorithm based on image processing techniques that is compatible with surveillance devices such as CCTV, wireless cameras, and UAVs. The algorithm uses the RGB colour model to detect the colour of the fire, which is characterized mainly by the intensity of the R (red) component. The growth of the fire is detected using Sobel edge detection. 
Finally, a colour-based segmentation technique was applied, based on the results of the first and second techniques, to identify the region of interest (ROI) of the fire. After analysing 50 different fire scenario images, the final accuracy obtained from testing the algorithm was 93.61% and the efficiency was 80.64%.", "title": "" }, { "docid": "861d7ad76337bc7960493d0b69976253", "text": "Dysuria, defined as pain, burning, or discomfort on urination, is more common in women than in men. Although urinary tract infection is the most frequent cause of dysuria, empiric treatment with antibiotics is not always appropriate. Dysuria occurs more often in younger women, probably because of their greater frequency of sexual activity. Older men are more likely to have dysuria because of an increased incidence of prostatic hyperplasia with accompanying inflammation and infection. A comprehensive history and physical examination can often reveal the cause of dysuria. Urinalysis may not be needed in healthier patients who have uncomplicated medical histories and symptoms. In most patients, however, urinalysis can help to determine the presence of infection and confirm a suspected diagnosis. Urine cultures and both urethral and vaginal smears and cultures can help to identify sites of infection and causative agents. Coliform organisms, notably Escherichia coli, are the most common pathogens in urinary tract infection. Dysuria can also be caused by noninfectious inflammation or trauma, neoplasm, calculi, hypoestrogenism, interstitial cystitis, or psychogenic disorders. 
Although radiography and other forms of imaging are rarely needed, these studies may identify abnormalities in the upper urinary tract when symptoms are more complex.", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" } ]
scidocsrr
553b43e196f2dc11cf28b2b14ba7f651
Detecting Spam in Chinese Microblogs - A Study on Sina Weibo
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "2ce4e4d5026114739adfeee7626e2aae", "text": "A neural network model for visual pattern recognition, called the \"neocognitron,\" was previously proposed by the author. In this paper, we discuss the mechanism of the model in detail. In order to demonstrate the ability of the neocognitron, we also discuss a pattern-recognition system which works with the mechanism of the neocognitron. The system has been implemented on a minicomputer and has been trained to recognize handwritten numerals. The neocognitron is a hierarchical network consisting of many layers of cells, and has variable connections between the cells in adjoining layers. It can acquire the ability to recognize patterns by learning, and can be trained to recognize any set of patterns. After finishing the process of learning, pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the input patterns. In the hierarchical network of the neocognitron, local features of the input pattern are extracted by the cells of a lower stage, and they are gradually integrated into more global features. Finally, each cell of the highest stage integrates all the information of the input pattern, and responds only to one specific pattern. Thus, the response of the cells of the highest stage shows the final result of the pattern-recognition of the network. During this process of extracting and integrating features, errors in the relative position of local features are gradually tolerated. 
The operation of tolerating positional error a little at a time at each stage, rather than all in one step, plays an important role in endowing the network with an ability to recognize even distorted patterns.", "title": "" }, { "docid": "930515101a83dd668ef6769c9626416c", "text": "Users speaking different languages may prefer different patterns in creating their passwords, and thus knowledge on English passwords cannot help to guess passwords from other languages well. Research has already shown Chinese passwords are one of the most difficult ones to guess. We believe that the conclusion is biased because, to the best of our knowledge, little empirical study has examined regional differences of passwords on a large scale, especially on Chinese passwords. In this paper, we study the differences between passwords from Chinese and English speaking users, leveraging over 100 million leaked and publicly available passwords from Chinese and international websites in recent years. We found that Chinese prefer digits when composing their passwords while English users prefer letters, especially lowercase letters. However, their strength against password guessing is similar. Second, we observe that both users prefer to use the patterns that they are familiar with, e.g., Chinese Pinyins for Chinese and English words for English users. Third, we observe that both Chinese and English users prefer their conventional format when they use dates to construct passwords. Based on these observations, we improve a PCFG (Probabilistic Context-Free Grammar) based password guessing method by inserting Pinyins (about 2.3% more entries) into the attack dictionary and insert our observed composition rules into the guessing rule set. 
As a result, our experiments show that the efficiency of password guessing increases by 34%.", "title": "" }, { "docid": "302c1d322868af2fec6bef62c5eb2dd5", "text": "This paper presents the combined LIPNUAM participation in the WASSA 2017 Shared Task on Emotion Intensity. In particular, the paper provides some highlights on the system that was presented to the shared task, partly based on the Tweetaneuse system used to participate in a French Sentiment Analysis task (DEFT2017). We combined lexicon-based features with sentence-level vector representations to obtain a random forest model.", "title": "" }, { "docid": "86cb3c072e67bed8803892b72297812c", "text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of", "title": "" }, { "docid": "27f3060ef96f1656148acd36d50f02ce", "text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. 
In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools to complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile cameras. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. © 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "12adb5e324d971d2c752f2193cec3126", "text": "Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a ‘crawler’ to extract the topology of Gnutella’s application level network; we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. 
These findings lead us to propose changes to the Gnutella protocol and implementations that bring significant performance and scalability improvements.", "title": "" }, { "docid": "a90c56a22559807463b46d1c7ab36cb3", "text": "We have studied manual motor function in a man deafferented by a severe peripheral sensory neuropathy. Motor power was almost unaffected. Our patient could produce a very wide range of preprogrammed finger movements with remarkable accuracy, involving complex muscle synergies of the hand and forearm muscles. He could perform individual finger movements and outline figures in the air with his eyes closed. He had normal pre- and postmovement EEG potentials, and showed the normal bi/triphasic pattern of muscle activation in agonist and antagonist muscles during fast limb movements. He could also move his thumb accurately through three different distances at three different speeds, and could produce three different levels of force at his thumb pad when required. Although he could not judge the weights of objects placed in his hands without vision, he was able to match forces applied by the experimenter to the pad of each thumb if he was given a minimal indication of thumb movement. Despite his success with these laboratory tasks, his hands were relatively useless to him in daily life. He was unable to grasp a pen and write, to fasten his shirt buttons or to hold a cup in one hand. Part of his difficulty lay in the absence of any automatic reflex correction in his voluntary movements, and also in an inability to sustain constant levels of muscle contraction without visual feedback over periods of more than one or two seconds. He was also unable to maintain long sequences of simple motor programmes without vision.", "title": "" }, { "docid": "873a1bbd835d6b116b1334afeee4f52c", "text": "Both basic science and clinical research on mindfulness, meditation, and related constructs have dramatically increased in recent years. 
However, interpretation of these research results has been challenging. The present article addresses unique conceptual and methodological problems posed by research in this area. Included among the key topics is the role of first-person experience and how it can be best studied, the challenges posed by intervention research designs in which true double-blinding is not possible, the nature of control and comparison conditions for research that includes mindfulness or other meditation-based interventions, issues in the adequate description of mindfulness and related trainings and interventions, the question of how mindfulness can be measured, questions regarding what can and cannot be inferred from self-report measures, and considerations regarding the structure of study design and data analyses. Most of these topics are germane to both basic and clinical research studies and have important bearing on the future scientific understanding of mindfulness and meditation.", "title": "" }, { "docid": "ba3f0e792b896b38f8844807a8d8e80e", "text": "In this paper, we present a novel self-learning single image super-resolution (SR) method, which restores a high-resolution (HR) image from self-examples extracted from the low-resolution (LR) input image itself without relying on extra external training images. In the proposed method, we directly use sampled image patches as the anchor points, and then learn multiple linear mapping functions based on anchored neighborhood regression to transform LR space into HR space. Moreover, we utilize the flipped and rotated versions of the self-examples to expand the internal patch space. Experimental comparison on standard benchmarks with state-of-the-art methods validates the effectiveness of the proposed approach.", "title": "" }, { "docid": "2a0de2b93a6a227380264e7bc6cac094", "text": "The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. 
For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area. We also try to answer two important questions: \"Are graphical passwords as secure as text-based passwords?\"; \"What are the major design and implementation issues for graphical passwords?\" This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods", "title": "" }, { "docid": "736ee2bed70510d77b1f9bb13b3bee68", "text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? 
Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.", "title": "" }, { "docid": "f6f1462e8edd8200948168423c87c1bf", "text": "Users' behaviors are driven by their preferences across various aspects of items they are potentially interested in purchasing, viewing, etc. Latent space approaches model these aspects in the form of latent factors. Although such approaches have been shown to lead to good results, the aspects that are important to different users can vary. In many domains, there may be a set of aspects for which all users care about and a set of aspects that are specific to different subsets of users. To explicitly capture this, we consider models in which there are some latent factors that capture the shared aspects and some user subset specific latent factors that capture the set of aspects that the different subsets of users care about.\n In particular, we propose two latent space models: rGLSVD and sGLSVD, that combine such a global and user subset specific sets of latent factors. The rGLSVD model assigns the users into different subsets based on their rating patterns and then estimates a global and a set of user subset specific local models whose number of latent dimensions can vary.\n The sGLSVD model estimates both global and user subset specific local models by keeping the number of latent dimensions the same among these models but optimizes the grouping of the users in order to achieve the best approximation. 
Our experiments on various real-world datasets show that the proposed approaches significantly outperform state-of-the-art latent space top-N recommendation approaches.", "title": "" }, { "docid": "bb5dccb965c71fcbb8c4f2f924e65316", "text": "BACKGROUND AND OBJECTIVES\nBecause skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regards to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation.\n\n\nMETHODS\nTechniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle.\n\n\nRESULTS\nThe techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results.\n\n\nCONCLUSIONS\nThe image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies in order to improve the efficiency.", "title": "" }, { "docid": "db42b2c5b9894943c3ba05fad07ee2f9", "text": "This paper deals principally with the grid connection problem of a kite-based system, named the “Kite Generator System (KGS).” It presents a control scheme of a closed-orbit KGS, which is a wind power system with a relaxation cycle. Such a system consists of a kite with its orientation mechanism and a power transformation system that connects the previous part to the electric grid. Starting from a given closed orbit, the optimal tether's length rate variation (the kite's tether radial velocity) and the optimal orbit's period are found. 
The trajectory-tracking problem is not considered in this paper; only the kite's tether radial velocity is controlled via the electric machine rotation velocity. The power transformation system transforms the mechanical energy generated by the kite into electrical energy that can be transferred to the grid. A Matlab/Simulink model of the KGS is employed to observe its behavior, and to ensure the control of its mechanical and electrical variables. In order to improve the KGS's efficiency in case of slow changes of wind speed, a maximum power point tracking (MPPT) algorithm is proposed.", "title": "" }, { "docid": "5ef325cffe20a0337eca258fa7ad8392", "text": "DEAP (Distributed Evolutionary Algorithms in Python) is a novel evolutionary computation framework for rapid prototyping and testing of ideas. Its design departs from most other existing frameworks in that it seeks to make algorithms explicit and data structures transparent, as opposed to the more common black box type of frameworks. It also incorporates easy parallelism where users need not concern themselves with gory implementation details like synchronization and load balancing, only functional decomposition. Several examples illustrate the multiple properties of DEAP.", "title": "" }, { "docid": "950d7d10b09f5d13e09692b2a4576c00", "text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. 
In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.", "title": "" }, { "docid": "cdd32b82093fc08fbc0016a7ac9f2e60", "text": "The research field of technology acceptance and software acceptance is a fertile field in the discipline of MIS. Acceptance research is mainly affected by the technology acceptance model (TAM). The TAM is counted as the major guideline for acceptance research. But recently more researchers discover the deficits of former acceptance research. The main cause of the criticism is the focus on quantitative research methods. We will show this with the help of former meta-studies and a literature review. Quantitative approaches are basically appropriate for the testing of theories. The development of new theories or constructs is followed to a lesser intent. In the article we will show how a qualitative approach can be used for theory-construction. 
We will introduce a qualitative research design and show how this approach can be used to develop new constructs of acceptance while some existing constructs taken from TAM and related theories cannot be confirmed.", "title": "" }, { "docid": "96ace1fc608d90ae53f903802bb60a10", "text": "Attributes offer useful mid-level features to interpret visual data. While most attribute learning methods are supervised by costly human-generated labels, we introduce a simple yet powerful unsupervised approach to learn and predict visual attributes directly from data. Given a large unlabeled image collection as input, we train deep Convolutional Neural Networks (CNNs) to output a set of discriminative, binary attributes often with semantic meanings. Specifically, we first train a CNN coupled with unsupervised discriminative clustering, and then use the cluster membership as a soft supervision to discover shared attributes from the clusters while maximizing their separability. The learned attributes are shown to be capable of encoding rich imagery properties from both natural images and contour patches. The visual representations learned in this way are also transferrable to other tasks such as object detection. We show other convincing results on the related tasks of image retrieval and classification, and contour detection.", "title": "" }, { "docid": "f50a58aad1697eef2d2d62be8eae7b08", "text": "Engineering biological systems with predictable behavior is a foundational goal of synthetic biology. To accomplish this, it is important to accurately characterize the behavior of biological devices. Prior characterization efforts, however, have generally not yielded enough high-quality information to enable compositional design. In the TASBE (A Tool-Chain to Accelerate Synthetic Biological Engineering) project we have developed a new characterization technique capable of producing such data. 
This document describes the techniques we have developed, along with examples of their application, so that the techniques can be accurately used by others. [Figure: normalized Dox transfer curve (IFP MEFL/plasmid vs. [Dox]), colored by plasmid bin; normalized Tal1 transfer curve (OFP MEFL/plasmid vs. IFP MEFL), colored by plasmid count.] Work partially sponsored by DARPA; the views and conclusions contained in this document are those of the authors and not DARPA or the U.S. Government.", "title": "" }, { "docid": "1d1ba5f131c9603fe3d919ad493a6dc1", "text": "By its very nature, software development consists of many knowledge-intensive processes. One of the most difficult to model, however, is requirements elicitation. This paper presents a mathematical model of the requirements elicitation process that clearly shows the critical role of knowledge in its performance. One metaprocess of requirements elicitation, selection of an appropriate elicitation technique, is also captured in the model. The values of this model are: (1) improved understanding of what needs to be performed during elicitation helps analysts improve their elicitation efforts, (2) improved understanding of how elicitation techniques are selected helps less experienced analysts be as successful as more experienced analysts, and (3) as we improve our ability to perform elicitation, we improve the likelihood that the systems we create will meet their intended customers’ needs. Many papers have been written that promulgate specific elicitation methods. A few have been written that model elicitation in general. However, none have yet to model elicitation in a way that makes clear the critical role played by knowledge. This paper’s model captures the critical roles played by knowledge in both elicitation and elicitation technique selection.", "title": "" } ]
scidocsrr
52fcac10e3a340aab6653031c2dae94d
Compliant leg behaviour explains basic dynamics of walking and running.
[ { "docid": "5d1e77b6b09ebac609f2e518b316bd49", "text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.", "title": "" } ]
[ { "docid": "d4400c07fe072a841c8f8e910c0e17f0", "text": "In the field of big data applications, lossless data compression and decompression can play an important role in improving the data center's efficiency in storage and distribution of data. To avoid becoming a performance bottleneck, they must be accelerated to have a capability of high speed data processing. As FPGAs begin to be deployed as compute accelerators in the data centers for its advantages of massive parallel customized processing capability, power efficiency and hardware reconfiguration. It is promising and interesting to use FPGAs for acceleration of data compression and decompression. The conventional development of FPGA accelerators using hardware description language costs much more design efforts than that of CPUs or GPUs. High level synthesis (HLS) can be used to greatly improve the design productivity. In this paper, we present a solution for accelerating lossless data decompression on FPGA by using HLS. With a pipelined data-flow structure, the proposed decompression accelerator can perform static Huffman decoding and LZ77 decompression at a very high throughput rate. According to the experimental results conducted on FPGA with the Calgary Corpus data benchmark, the average data throughput of the proposed decompression core achieves to 4.6 Gbps while running at 200 MHz.", "title": "" }, { "docid": "6f9ffe5e1633046418ca0bc4f7089b2f", "text": "This paper presents a new motion planning primitive to be used for the iterative steering of vision-based autonomous vehicles. This primitive is a parameterized quintic spline, denoted as -spline, that allows interpolating an arbitrary sequence of points with overall second-order geometric ( -) continuity. Issues such as completeness, minimality, regularity, symmetry, and flexibility of these -splines are addressed in the exposition. The development of the new primitive is tightly connected to the inversion control of nonholonomic car-like vehicles. 
The paper also exposes a supervisory strategy for iterative steering that integrates feedback vision data processing with the feedforward inversion control.", "title": "" }, { "docid": "84569374aa1adb152aee714d053b082d", "text": "PURPOSE\nTo describe the insertions of the superficial medial collateral ligament (sMCL) and posterior oblique ligament (POL) and their related osseous landmarks.\n\n\nMETHODS\nInsertions of the sMCL and POL were identified and marked in 22 unpaired human cadaveric knees. The surface area, location, positional relations, and morphology of the sMCL and POL insertions and related osseous structures were analyzed on 3-dimensional images.\n\n\nRESULTS\nThe femoral insertion of the POL was located 18.3 mm distal to the apex of the adductor tubercle (AT). The femoral insertion of the sMCL was located 21.1 mm distal to the AT and 9.2 mm anterior to the POL. The angle between the femoral axis and femoral insertion of the sMCL was 18.6°, and that between the femoral axis and the POL insertion was 5.1°. The anterior portions of the distal fibers of the POL were attached to the fascia cruris and semimembranosus tendon, whereas the posterior fibers were attached to the posteromedial side of the tibia directly. The tibial insertion of the POL was located just proximal and medial to the superior edge of the semimembranosus groove. The tibial insertion of the sMCL was attached firmly and widely to the tibial crest. The mean linear distances between the tibial insertion of the POL or sMCL and joint line were 5.8 and 49.6 mm, respectively.\n\n\nCONCLUSIONS\nThis study used 3-dimensional images to assess the insertions of the sMCL and POL and their related osseous landmarks. The AT was identified clearly as an osseous landmark of the femoral insertions of the sMCL and POL. 
The tibial crest and semimembranosus groove served as osseous landmarks of the tibial insertions of the sMCL and POL.\n\n\nCLINICAL RELEVANCE\nBy showing further details of the anatomy of the knee, the described findings can assist surgeons in anatomic reconstruction of the sMCL and POL.", "title": "" }, { "docid": "9c01496a3f3c52705671553165aa2024", "text": "Fiberoptic bronchoscopy is a widely performed procedure that is generally considered to be safe. The first performed bronchoscopy was done by Gustav Killian in 1897; however, the development of flexible fiberoptic bronchoscopy was accomplished by Ikeda in 1964(1). Flexible fiberoptic bronchoscopy is a key diagnostic and therapeutic procedure(2). It is estimated that more than 500,000 of these procedures are performed each year by pulmonologists, otolaryngologists, anesthesiologists, and cardiothoracic and trauma surgeons(3). Despite the widespread practice of diagnostic flexible bronchoscopy, there are no firm guidelines that assure a uniform acquisition of basic skills and competency in this procedure, nor are there guidelines to ensure uniform training and competency in advanced diagnostic flexible bronchoscopic techniques(4). The purpose of this review is to provide an update on 1) tracheobronchial anatomy, 2) flexible fiberoptic bronchoscopy exam, 3) training and competence on fiberoptic bronchoscopy, and 4) application of flexible fiberoptic bronchoscopy in thoracic anesthesia.", "title": "" }, { "docid": "631dc14ab0df1e5def0998bcf6ad016e", "text": "This study investigates the performance of two open source intrusion detection systems (IDSs) namely Snort and Suricata for accurately detecting the malicious traffic on computer networks. Snort and Suricata were installed on two different but identical computers and the performance was evaluated at 10 Gbps network speed. 
It was noted that Suricata could process a higher speed of network traffic than Snort with lower packet drop rate but it consumed higher computational resources. Snort had higher detection accuracy and was thus selected for further experiments. It was observed that Snort triggered a high rate of false positive alarms. To solve this problem a Snort adaptive plug-in was developed. To select the best performing algorithm for the Snort adaptive plug-in, an empirical study was carried out with different learning algorithms and Support Vector Machine (SVM) was selected. A hybrid version of SVM and Fuzzy logic produced a better detection accuracy. But the best result was achieved using an optimized SVM with the firefly algorithm with FPR (false positive rate) as 8.6% and FNR (false negative rate) as 2.2%, which is a good result. The novelty of this work is the performance comparison of two IDSs at 10 Gbps and the application of hybrid and optimized machine learning algorithms to Snort.", "title": "" }, { "docid": "d29ad30492b084cbcd2e6ede4665f483", "text": "K-means algorithm has been widely used in machine learning and data mining due to its simplicity and good performance. However, the standard k-means algorithm would be quite slow for clustering millions of data into thousands of or even tens of thousands of clusters. In this paper, we propose a fast k-means algorithm named multi-stage k-means (MKM) which uses a multi-stage filtering approach. The multi-stage filtering approach greatly accelerates the k-means algorithm via a coarse-to-fine search strategy. To further speed up the algorithm, hashing is introduced to accelerate the assignment step which is the most time-consuming part in k-means. 
Extensive experiments on several massive datasets show that the proposed algorithm can obtain up to 600X speed-up over the k-means algorithm with comparable accuracy.", "title": "" }, { "docid": "264c63f249f13bf3eb4fd5faac8f4fa0", "text": "This paper presents the study to investigate the possibility of the stand-alone micro hydro for low-cost electricity production which can satisfy the energy load requirements of a typical remote and isolated rural area. In this framework, the feasibility study in term of the technical and economical performances of the micro hydro system are determined according to the rural electrification concept. The proposed axial flux permanent magnet (AFPM) generator will be designed for micro hydro under sustainable development to optimize between cost and efficiency by using the local materials and basic engineering knowledge. First of all, the simple simulation of micro hydro model for lighting system is developed by considering the optimal size of AFPM generator. The simulation results show that the optimal micro hydro power plant with 70 W can supply the 9 W compact fluorescent up to 20 set for 8 hours by using pressure of water with 6 meters and 0.141 m3/min of flow rate. Lastly, a proposed micro hydro power plant can supply lighting system for rural electrification up to 525.6 kWh/year or 1,839.60 Baht/year and reduce 0.33 ton/year of CO2 emission.", "title": "" }, { "docid": "150a09dbdbc53282a23a2e99e4509255", "text": "The reductionist approach has revolutionized biology in the past 50 years. Yet its limits are being felt as the complexity of cellular interactions is gradually revealed by high-throughput technology. In order to make sense of the deluge of \"omic data\", a hypothesis-driven view is needed to understand how biomolecular interactions shape cellular networks. We review recent efforts aimed at building in vitro biochemical networks that reproduce the flow of genetic regulation. 
We highlight how those efforts have culminated in the rational construction of biochemical oscillators and bistable memories in test tubes. We also recapitulate the lessons learned about in vivo biochemical circuits such as the importance of delays and competition, the links between topology and kinetics, as well as the intriguing resemblance between cellular reaction networks and ecosystems.", "title": "" }, { "docid": "747ca83d8a4be084a30bbba3e96f248c", "text": "Introduction to chapter. Due to its cryptographic and operational key features such as the one-way function property, high speed and a fixed output size independent of input size the hash algorithm is one of the most important cryptographic primitives. A critical drawback of most cryptographic algorithms is the large computational overheads. This is getting more critical since the data amount to process or communicate is dramatically increasing. In many of such cases, a proper use of the hash algorithm effectively reduces the computational overhead. Digital signature algorithm and the message authentication are the most common applications of the hash algorithms. The increasing data size also motivates hardware designers to have a throughput optimal architecture of a given hash algorithm. In this chapter, some popular hash algorithms and their cryptanalysis are briefly introduced, and a design methodology for throughput optimal architectures of MD4-based hash algorithms is described in detail.", "title": "" }, { "docid": "d159ddace8c8d33963a304e04484aeff", "text": "This work addresses the problem of semantic scene understanding under fog. Although marked progress has been made in semantic scene understanding, it is mainly concentrated on clear-weather scenes. Extending semantic segmentation methods to adverse weather conditions such as fog is crucial for outdoor applications. 
In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both labeled synthetic foggy data and unlabeled real foggy data. The method is based on the fact that the results of semantic segmentation in moderately adverse conditions (light fog) can be bootstrapped to solve the same problem in highly adverse conditions (dense fog). CMAda is extensible to other adverse conditions and provides a new paradigm for learning with synthetic data and unlabeled real data. In addition, we present four other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) a novel fog densification method to densify the fog in real foggy scenes without known depth; and 4) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 40 images under dense fog. Our experiments show that 1) our fog simulation and fog density estimator outperform their state-of-the-art counterparts with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly, benefiting both from our synthetic and real foggy data. The datasets and code are available at the project website. D. Dai · C. Sakaridis · S. Hecker · L. Van Gool ETH Zürich, Zurich, Switzerland L. Van Gool KU Leuven, Leuven, Belgium", "title": "" }, { "docid": "3b05004828d71f1b69d80cb25e165d7f", "text": "Mapping in the GPS-denied environment is an important and challenging task in the field of robotics. In the large environment, mapping can be significantly accelerated by multiple robots exploring different parts of the environment. Accordingly, a key problem is how to integrate these local maps built by different robots into a single global map. 
In this paper, we propose an approach for simultaneous merging of multiple grid maps by the robust motion averaging. The main idea of this approach is to recover all global motions for map merging from a set of relative motions. Therefore, it firstly adopts the pair-wise map merging method to estimate relative motions for grid map pairs. To obtain as many reliable relative motions as possible, a graph-based sampling scheme is utilized to efficiently remove unreliable relative motions obtained from the pair-wise map merging. Subsequently, the accurate global motions can be recovered from the set of reliable relative motions by the motion averaging. Experimental results carried out on real robot data sets demonstrate that the proposed approach can achieve simultaneous merging of multiple grid maps with good performance.", "title": "" }, { "docid": "18a483a6f8ce4f20a6e5209ca6dd4808", "text": "OBJECTIVE\nCurrent mainstream EEG electrode setups permit efficient recordings, but are often bulky and uncomfortable for subjects. Here we introduce a novel type of EEG electrode, which is designed for an optimal wearing comfort. The electrode is referred to as C-electrode where \"C\" stands for comfort.\n\n\nMETHODS\nThe C-electrode does not require any holder/cap for fixation on the head nor does it use traditional pads/lining of disposable electrodes - thus, it does not disturb subjects. Fixation of the C-electrode on the scalp is based entirely on the adhesive interaction between the very light C-electrode/wire construction (<35 mg) and a droplet of EEG paste/gel. Moreover, because of its miniaturization, both C-electrode (diameter 2-3mm) and a wire (diameter approximately 50 microm) are minimally (or not at all) visible to an external observer. 
EEG recordings with standard and C-electrodes were performed during rest condition, self-paced movements and median nerve stimulation.\n\n\nRESULTS\nThe quality of EEG recordings for all three types of experimental conditions was similar for standard and C-electrodes, i.e., for near-DC recordings (Bereitschaftspotential), standard rest EEG spectra (1-45 Hz) and very fast oscillations approximately 600 Hz (somatosensory evoked potentials). The tests also showed that once being placed on a subject's head, C-electrodes can be used for 9h without any loss in EEG recording quality. Furthermore, we showed that C-electrodes can be effectively utilized for Brain-Computer Interfacing. C-electrodes proved to possess a high stability of mechanical fixation (stayed attached with 2.5 g accelerations). Subjects also reported not having any tactile sensations associated with wearing of C-electrodes.\n\n\nCONCLUSION\nC-electrodes provide optimal wearing comfort without any loss in the quality of EEG recordings.\n\n\nSIGNIFICANCE\nWe anticipate that C-electrodes can be used in a wide range of clinical, research and emerging neuro-technological environments.", "title": "" }, { "docid": "c9b4366d56a889b5f25c92fe45898c08", "text": "Studies of the clinical correlates of the subtypes of Attention-Deficit/Hyperactivity Disorder (ADHD) have identified differences in the representation of age, gender, prevalence, comorbidity, and treatment. We report retrospective chart review data detailing the clinical characteristics of the Inattentive (IA) and Combined (C) subtypes of ADHD in 143 cases of ADHD-IA and 133 cases of ADHD-C. The children with ADHD-IA were older, more likely to be female, and had more comorbid internalizing disorders and learning disabilities. Individuals in the ADHD-IA group were two to five times as likely to have a referral for speech and language problems. 
The children with ADHD-IA were rated as having less overall functional impairment, but did have difficulty with academic achievement. Children with ADHD-IA were less likely to be treated with stimulants. One eighth of the children with ADHD-IA still had significant symptoms of hyperactivity/impulsivity, but did not meet the DSM-IV threshold for diagnosis of ADHD-Combined Type. The ADHD-IA subtype includes children with no hyperactivity and children who still manifest clinically significant hyperactive symptomatology but do not meet DSM-IV criteria for Combined Type. ADHD-IA children are often seen as having speech and language problems, and are less likely to receive medication treatment, but respond to medical treatment with improvement both in attention and residual hyperactive/impulsive symptoms.", "title": "" }, { "docid": "9507febd41296b63e8a6434eb27400f9", "text": "This paper presents a new approach for automatic concept extraction, using grammatical parsers and Latent Semantic Analysis. The methodology is described, also the tool used to build the benchmarkingcorpus. The results obtained on student essays shows good inter-rater agreement and promising machine extraction performance. Concept extraction is the first step to automatically extract concept maps fromstudent’s essays or Concept Map Mining.", "title": "" }, { "docid": "8bd9a5cf3ca49ad8dd38750410a462b0", "text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia. We performed Pecs I and Pec II block simultaneously as primary anesthesia under moderate sedation with dexmedetomidine for breast conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. 
The block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.", "title": "" }, { "docid": "66423bc00bb724d1d0c616397d898dd0", "text": "Background\nThere is a growing trend for patients to seek the least invasive treatments with less risk of complications and downtime for facial rejuvenation. Thread embedding acupuncture has become popular as a minimally invasive treatment. However, there is little clinical evidence in the literature regarding its effects.\n\n\nMethods\nThis single-arm, prospective, open-label study recruited participants who were women aged 40-59 years, with Glogau photoaging scale III-IV. Fourteen participants received thread embedding acupuncture one time and were measured before and after 1 week from the procedure. The primary outcome was a jowl to subnasale vertical distance. The secondary outcomes were facial wrinkle distances, global esthetic improvement scale, Alexiades-Armenakas laxity scale, and patient-oriented self-assessment scale.\n\n\nResults\nFourteen participants underwent thread embedding acupuncture alone, and 12 participants revisited for follow-up outcome measures. For the primary outcome measure, both jowls were elevated in vertical height by 1.87 mm (left) and 1.43 mm (right). Distances of both melolabial and nasolabial folds showed significant improvement. In the Alexiades-Armenakas laxity scale, each evaluator evaluated for four and nine participants by 0.5 grades improved. In the global aesthetic improvement scale, improvement was graded as 1 and 2 in nine and five cases, respectively. The most common adverse events were mild bruising, swelling, and pain. However, adverse events occurred, although mostly minor and of short duration.\n\n\nConclusion\nIn this study, thread embedding acupuncture showed clinical potential for facial wrinkles and laxity. 
However, further large-scale trials with a controlled design and objective measurements are needed.", "title": "" }, { "docid": "eb3a993e5302a45c11daa8d3482468c7", "text": "Network structure determination is an important issue in pattern classification based on a probabilistic neural network. In this study, a supervised network structure determination algorithm is proposed. The proposed algorithm consists of two parts and runs in an iterative way. The first part identifies an appropriate smoothing parameter using a genetic algorithm, while the second part determines suitable pattern layer neurons using a forward regression orthogonal algorithm. The proposed algorithm is capable of offering a fairly small network structure with satisfactory classification accuracy.", "title": "" }, { "docid": "2ce6a8dfe133da8a4486e2aca3487a03", "text": "This paper responds to research into the aerodynamics of flapping wings and to the problem of the lack of an adequate method which accommodates large-scale trailing vortices. A comparative review is provided of prevailing aerodynamic methods, highlighting their respective limitations as well as strengths. The main advantages of an unsteady aerodynamic panel method are then introduced and illustrated by modelling the flapping wings of a tethered sphingid moth and comparing the results with those generated using a quasi-steady method. The improved correlations of the aerodynamic forces and the resultant graphics clearly demonstrate the advantages of the unsteady panel method (namely, its ability to detail the trailing wake and to include dynamic effects in a distributed manner).", "title": "" }, { "docid": "2bee8125c2a8a1c85ab7f044e28e2191", "text": "To achieve instantaneous control of induction motor torque using field-orientation techniques, it is necessary that the phase currents be controlled to maintain precise instantaneous relationships. Failure to do so results in a noticeable degradation in torque response. 
Most of the currently used approaches to achieve this control employ classical control strategies which are only correct for steady-state conditions. A modern control theory approach which circumvents these limitations is developed. The approach uses a state-variable feedback control model of the field-oriented induction machine. This state-variable controller is shown to be intrinsically more robust than PI regulators. Experimental verification of the performance of this state-variable control strategy in achieving current-loop performance and torque control at high operating speeds is included.", "title": "" }, { "docid": "ea87229e46fd049930c75a9d5187fd6c", "text": "Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.", "title": "" } ]
scidocsrr