query_id (string, 32 chars) | query (string, 5–5.38k chars) | positive_passages (list, 1–23 items) | negative_passages (list, 4–100 items) | subset (string, 7 classes) |
---|---|---|---|---|
0ef23f3cae8b3bfb41a8781c5febc94a
|
Fusing LIDAR and images for pedestrian detection using convolutional neural networks
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
[
{
"docid": "030e3872ea6169e41236b416818caf31",
"text": "BACKGROUND\nPersonality traits are considered risk factors for drug use, and, in turn, the psychoactive substances impact individuals' traits. Furthermore, there is increasing interest in developing treatment approaches that match an individual's personality profile. To advance our knowledge of the role of individual differences in drug use, the present study compares the personality profile of tobacco, marijuana, cocaine, and heroin users and non-users using the wide spectrum Five-Factor Model (FFM) of personality in a diverse community sample.\n\n\nMETHOD\nParticipants (N = 1,102; mean age = 57) were part of the Epidemiologic Catchment Area (ECA) program in Baltimore, MD, USA. The sample was drawn from a community with a wide range of socio-economic conditions. Personality traits were assessed with the Revised NEO Personality Inventory (NEO-PI-R), and psychoactive substance use was assessed with systematic interview.\n\n\nRESULTS\nCompared to never smokers, current cigarette smokers score lower on Conscientiousness and higher on Neuroticism. Similar, but more extreme, is the profile of cocaine/heroin users, which score very high on Neuroticism, especially Vulnerability, and very low on Conscientiousness, particularly Competence, Achievement-Striving, and Deliberation. By contrast, marijuana users score high on Openness to Experience, average on Neuroticism, but low on Agreeableness and Conscientiousness.\n\n\nCONCLUSION\nIn addition to confirming high levels of negative affect and impulsive traits, this study highlights the links between drug use and low Conscientiousness. These links provide insight into the etiology of drug use and have implications for public health interventions.",
"title": ""
},
{
"docid": "c8f6eac662b30768b2e64b3bd3502e73",
"text": "This paper discusses the use of genetic programming (GP) and genetic algorithms (GA) to evolve solutions to a problem in robot control. GP is seen as an intuitive evolutionary method while GAs require an extra layer of human intervention. The infrastructures for the different evolutionary approaches are compared.",
"title": ""
},
{
"docid": "2e0941f7874ce5372927544791a81a2e",
"text": "This project has as an objective of the extraction of humans in the foreground of image by creating a trimap which combines a depth map analysis and the Chromakey technique. The trimap is generated automatically, differing from the manual implementations which require user interaction. The extraction is based on extra information deriving from a structured lighting device (Kinect) integrated with a high resolution camera. With the junction of the monochromatic Kinect camera and the high definition camera, the results so far have been more expressive than only using the RGB and monochromatic cameras from the Kinect.",
"title": ""
},
{
"docid": "4019d3f46ec0ef42145d8d63b62a88d0",
"text": "Learning policies on data synthesized by models can in principle quench the thirst of reinforcement learning algorithms for large amounts of real experience, which is often costly to acquire. However, simulating plausible experience de novo is a hard problem for many complex environments, often resulting in biases for modelbased policy evaluation and search. Instead of de novo synthesis of data, here we assume logged, real experience and model alternative outcomes of this experience under counterfactual actions, i.e. actions that were not actually taken. Based on this, we propose the Counterfactually-Guided Policy Search (CF-GPS) algorithm for learning policies in POMDPs from off-policy experience. It leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes. CF-GPS can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions. In contrast to off-policy algorithms based on Importance Sampling which re-weight data, CF-GPS leverages a model to explicitly consider alternative outcomes, allowing the algorithm to make better use of experience data. We find empirically that these advantages translate into improved policy evaluation and search results on a non-trivial grid-world task. Finally, we show that CF-GPS generalizes the previously proposed Guided Policy Search and that reparameterization-based algorithms such Stochastic Value Gradient can be interpreted as counterfactual methods.",
"title": ""
},
{
"docid": "22a667496ebd652a47ac1ffc3546ec96",
"text": "Most current semantic segmentation methods rely on fully convolutional networks (FCNs). However, their use of large receptive fields and many pooling layers cause low spatial resolution inside the deep layers. This leads to predictions with poor localization around the boundaries. Prior work has attempted to address this issue by post-processing predictions with CRFs or MRFs. But such models often fail to capture semantic relationships between objects, which causes spatially disjoint predictions. To overcome these problems, recent methods integrated CRFs or MRFs into an FCN framework. The downside of these new models is that they have much higher complexity than traditional FCNs, which renders training and testing more challenging. In this work we introduce a simple, yet effective Convolutional Random Walk Network (RWN) that addresses the issues of poor boundary localization and spatially fragmented predictions with very little increase in model complexity. Our proposed RWN jointly optimizes the objectives of pixelwise affinity and semantic segmentation. It combines these two objectives via a novel random walk layer that enforces consistent spatial grouping in the deep layers of the network. Our RWN is implemented using standard convolution and matrix multiplication. This allows an easy integration into existing FCN frameworks and it enables end-to-end training of the whole network via standard back-propagation. Our implementation of RWN requires just 131 additional parameters compared to the traditional FCNs, and yet it consistently produces an improvement over the FCNs on semantic segmentation and scene labeling.",
"title": ""
},
{
"docid": "74a3c4dae9573325b292da736d46a78e",
"text": "Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet, more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.",
"title": ""
},
{
"docid": "0a0ec569738b90f44b0c20870fe4dc2f",
"text": "Transactional memory provides a concurrency control mechanism that avoids many of the pitfalls of lock-based synchronization. Researchers have proposed several different implementations of transactional memory, broadly classified into software transactional memory (STM) and hardware transactional memory (HTM). Both approaches have their pros and cons: STMs provide rich and flexible transactional semantics on stock processors but incur significant overheads. HTMs, on the other hand, provide high performance but implement restricted semantics or add significant hardware complexity. This paper is the first to propose architectural support for accelerating transactions executed entirely in software. We propose instruction set architecture (ISA) extensions and novel hardware mechanisms that improve STM performance. We adapt a high-performance STM algorithm supporting rich transactional semantics to our ISA extensions (called hardware accelerated software transactional memory or HASTM). HASTM accelerates fully virtualized nested transactions, supports language integration, and provides both object-based and cache-line based conflict detection. We have implemented HASTM in an accurate multi-core IA32 simulator. Our simulation results show that (1) HASTM single-thread performance is comparable to a conventional HTM implementation; (2) HASTM scaling is comparable to a STM implementation; and (3) HASTM is resilient to spurious aborts and can scale better than HTM in a multi-core setting. Thus, HASTM provides the flexibility and rich semantics of STM, while giving the performance of HTM.",
"title": ""
},
{
"docid": "fba2a59e74e7288cbdb1970e4a52d454",
"text": "Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted such as by noise, and if the set of hypotheses we are selecting from is large, then \\folklore\" also warns about \\overrtting\" the cross-In this paper, we explain how this \\overrtting\" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computa-tionally eecient form of leave{one{out cross-validation to select such a hypothesis. Finally , we present experimental results for one domain, that show LOOCVCV consistently beating picking the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets.",
"title": ""
},
{
"docid": "7e459967f93c4cf0b432717aa41201e1",
"text": "Paper describes the development of prototype that enables monitoring of heart rate and inter beat interval for several subjects. The prototype was realized using ESP8266 hardware modules, WebSocket library, nodejs and JavaScript. System architecture is described where nodejs server acts as the signal processing and GUI code provider for clients. Signal processing algorithm was implemented in JavaScript. Application GUI is presented which can be used on mobile devices. Several important parts of the code are described which illustrate the communication between ESP8266 modules, server and clients. Developed prototype shows one of the possible realizations of group monitoring of biomedical data.",
"title": ""
},
{
"docid": "244a02626a49fd61c274874577de4388",
"text": "New apple fruit recognition algorithms based on colour features are presented to estimate the number of fruits and develop models for early prediction of apple yield, in a multi-disciplinary approach linking computer science with agricultural engineering and horticulture as part of precision agriculture. Fifty cv. ‘Gala’ apple digital images were captured twice, i.e. after June drop and during ripening, on the preferred western side of the tree row with a variability of between 70 and 170 fruit per tree, under natural daylight conditions at Bonn, Germany. Several image processing algorithms and fruit counting algorithms were used to analyse the apple images. Finally, an apple recognition algorithm with colour difference R − B (red minus blue) and G − R (green minus red) was developed for apple images after June drop, and two different colour models were used to segment ripening period apple images. The algorithm was tested on 50 images of trees in each period. Close correlation coefficients R 2 of 0.80 and 0.85 were obtained for two developmental periods between apples detected by the fruit counting algorithm and those manually counted. Two sets of data in each period were used for modelling yield prediction of the apple fruits. In the calibration data set, the R 2 values between apples detected by the fruit counting algorithm and actual harvested yield were from 0.57 for young fruit after June drop to 0.70 in the fruit ripening period. In the validation data set, the R 2 value between the number of apples predicted by the model and actual yield at harvest ranged from 0.58 to 0.71. The proposed model showed great potential for early prediction of yield for individual trees of apple and possibly other fruit crops.",
"title": ""
},
{
"docid": "7d308c302065253ee1adbffad04ff3f1",
"text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.",
"title": ""
},
{
"docid": "9e0cbbe8d95298313fd929a7eb2bfea9",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "2a15dfdf9c9a225ef1328e72100f8035",
"text": "We present an efficient numerical strategy for the Bayesian solution of inverse problems. Stochastic collocation methods, based on generalized polynomial chaos (gPC), are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. This approximation then defines a surrogate posterior probability density that can be evaluated repeatedly at minimal computational cost. The ability to simulate a large number of samples from the posterior distribution results in very accurate estimates of the inverse solution and its associated uncertainty. Combined with high accuracy of the gPC-based forward solver, the new algorithm can provide great efficiency in practical applications. A rigorous error analysis of the algorithm is conducted, where we establish convergence of the approximate posterior to the true posterior and obtain an estimate of the convergence rate. It is proved that fast (exponential) convergence of the gPC forward solution yields similarly fast (exponential) convergence of the posterior. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems of varying smoothness and dimension. AMS subject classifications: 41A10, 60H35, 65C30, 65C50",
"title": ""
},
{
"docid": "e5016e84bdbd016e880f12bfdfd99cb5",
"text": "The subject of this paper is a method which suppresses systematic errors of resolvers and optical encoders with sinusoidal line signals. The proposed method does not require any additional hardware and the computational efforts are minimal. Since this method does not cause any time delay, the dynamic of the speed control is not affected. By means of this new scheme, dynamic and smooth running characteristics of drive systems are improved considerably.",
"title": ""
},
{
"docid": "89dea4ec4fd32a4a61be184d97ae5ba6",
"text": "In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image-synthesis. Based on the principal of positionalequivariance of features, Capsule Network’s ability to encode spatial relationships between the features of the image helps it become a more powerful critic in comparison to Convolutional Neural Networks (CNNs) used in current architectures for image synthesis. Our proposed GAN architectures learn the data manifold much faster and therefore, synthesize visually accurate images in significantly lesser number of training samples and training epochs in comparison to GANs and its variants that use CNNs. Apart from analyzing the quantitative results corresponding the images generated by different architectures, we also explore the reasons for the lower coverage and diversity explored by the GAN architectures that use CNN critics.",
"title": ""
},
{
"docid": "f97093a848329227f363a8a073a6334a",
"text": "With the increasing in mobile application systems and a high competition between companies, that led to increase in the number of mobile application projects. Mobile software development is a group of process for creating software for mobile devices with limited resources like small screen, low-power. The development of mobile applications is a big challenging because of rapidly changing business requirements and technical constraints for mobile systems. So, developers faced the challenge of a dynamic environment and the Changing of mobile application requirements. Moreover, Mobile applications should adapt appropriate software development methods that act in response efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers ,Agile methodologies was found to be most suitable for mobile development projects as they are short time, require flexibility, reduces waste and time to market. Finally, in this research we are looking for a suitable process model that conforms to the requirement of mobile application, we are going to investigate agile development methods to find a way, making the development of mobile application easy and compatible with mobile device features.",
"title": ""
},
{
"docid": "1121e6d94c1e545e0fa8b0d8b0ef5997",
"text": "Research is a continuous phenomenon. It is recursive in nature. Every research is based on some earlier research outcome. A general approach in reviewing the literature for a problem is to categorize earlier work for the same problem as positive and negative citations. In this paper, we propose a novel automated technique, which classifies whether an earlier work is cited as sentiment positive or sentiment negative. Our approach first extracted the portion of the cited text from citing paper. Using a sentiment lexicon we classify the citation as positive or negative by picking a window of at most five (5) sentences around the cited place (corpus). We have used Naïve-Bayes Classifier for sentiment analysis. The algorithm is evaluated on a manually annotated and class labelled collection of 150 research papers from the domain of computer science. Our preliminary results show an accuracy of 80%. We assert that our approach can be generalized to classification of scientific research papers in different disciplines.",
"title": ""
},
{
"docid": "b1958bbb9348a05186da6db649490cdd",
"text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.",
"title": ""
},
{
"docid": "07fb577f1393bf4b33693961827e99aa",
"text": "Diabetes is one among the supreme health challenges of the current century. Most common method for estimation of blood glucose concentration is using glucose meter. The process involves pricking the finger and extracting the blood along with chemical analysis being done with the help of disposable test strips. Non-invasive method for glucose estimation promotes regular testing, adequate control and reduction in health care cost. The proposed method makes use of a near infrared sensor for determination of blood glucose. Near-infrared (NIR) is sent through the fingertip, before and after blocking the blood flow by making use of a principle called occlusion. By analyzing the variation in voltages received after reflection in both the cases with the dataset, the current diabetic condition as well as the approximate glucose level of the individual is predicted. The results obtained are being validated with glucose meter readings and statistical analysis of the readings where done. Analysis shows that the bias as well as the standard deviation decreases as the glucose concentration increases. The obtained result is then communicated with a smart phone through Bluetooth for further communication with the doctor.",
"title": ""
},
{
"docid": "920c1b2b4720586b1eb90b08631d9e6f",
"text": "Linear active-power-only power flow approximations are pervasive in the planning and control of power systems. However, AC power systems are governed by a system of nonlinear non-convex power flow equations. Existing linear approximations fail to capture key power flow variables including reactive power and voltage magnitudes, both of which are necessary in many applications that require voltage management and AC power flow feasibility. This paper proposes novel linear-programming models (the LPAC models) that incorporate reactive power and voltage magnitudes in a linear power flow approximation. The LPAC models are built on a polyhedral relaxation of the cosine terms in the AC equations, as well as Taylor approximations of the remaining nonlinear terms. Experimental comparisons with AC solutions on a variety of standard IEEE and Matpower benchmarks show that the LPAC models produce accurate values for active and reactive power, phase angles, and voltage magnitudes. The potential benefits of the LPAC models are illustrated on two “proof-of-concept” studies in power restoration and capacitor placement.",
"title": ""
}
] |
scidocsrr
|
a4bc9b166b4c926585f670760a3169e1
|
Why people buy virtual items in virtual worlds with real money
|
[
{
"docid": "bd13f54cd08fe2626fe8de4edce49197",
"text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that re ̄ects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. # 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "06b4bfebe295e3dceadef1a842b2e898",
"text": "Constant changes in the economic environment, where globalization and the development of the knowledge economy act as drivers, are systematically pushing companies towards the challenge of accessing external markets. Web localization constitutes a new field of study and professional intervention. From the translation perspective, localization equates to the website being adjusted to the typological, discursive and genre conventions of the target culture, adapting that website to a different language and culture. This entails much more than simply translating the content of the pages. The content of a webpage is made up of text, images and other multimedia elements, all of which have to be translated and subjected to cultural adaptation. A case study has been carried out to analyze the current presence of localization within Spanish SMEs from the chemical sector. Two types of indicator have been established for evaluating the sample: indicators for evaluating company websites (with a Likert scale from 0–4) and indicators for evaluating web localization (0–2 scale). The results show overall website quality is acceptable (2.5 points out of 4). The higher rating has been obtained by the system quality (with 2.9), followed by information quality (2.7 points) and, lastly, service quality (1.9 points). In the web localization evaluation, the contact information aspects obtain 1.4 points, the visual aspect 1.04, and the navigation aspect was the worse considered (0.37). These types of analysis facilitate the establishment of practical recommendations aimed at SMEs in order to increase their international presence through the localization of their websites.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "a72e4785509d85702096fb304e9fdac5",
"text": "Cross-lingual adaptation aims to learn a prediction model in a label-scarce target language by exploiting labeled data from a labelrich source language. An effective crosslingual adaptation system can substantially reduce the manual annotation effort required in many natural language processing tasks. In this paper, we propose a new cross-lingual adaptation approach for document classification based on learning cross-lingual discriminative distributed representations of words. Specifically, we propose to maximize the loglikelihood of the documents from both language domains under a cross-lingual logbilinear document model, while minimizing the prediction log-losses of labeled documents. We conduct extensive experiments on cross-lingual sentiment classification tasks of Amazon product reviews. Our experimental results demonstrate the efficacy of the proposed cross-lingual adaptation approach.",
"title": ""
},
{
"docid": "9d5ca4c756b63c60f6a9d6308df63ea3",
"text": "This paper presents recent advances in the project: development of a convertible unmanned aerial vehicle (UAV). This aircraft is able to change its flight configuration from hover to level flight and vice versa by means of a transition maneuver, while maintaining the aircraft in flight. For this purpose a nonlinear control strategy based on Lyapunov design is given. Numerical results are presented showing the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "1202e46fcc6c2f88b81fcf153ed4fd7d",
"text": "Recently, several high dimensional classification methods have been proposed to automatically discriminate between patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and elderly controls (CN) based on T1-weighted MRI. However, these methods were assessed on different populations, making it difficult to compare their performance. In this paper, we evaluated the performance of ten approaches (five voxel-based methods, three methods based on cortical thickness and two methods based on the hippocampus) using 509 subjects from the ADNI database. Three classification experiments were performed: CN vs AD, CN vs MCIc (MCI who had converted to AD within 18 months, MCI converters - MCIc) and MCIc vs MCInc (MCI who had not converted to AD within 18 months, MCI non-converters - MCInc). Data from 81 CN, 67 MCInc, 39 MCIc and 69 AD were used for training and hyperparameters optimization. The remaining independent samples of 81 CN, 67 MCInc, 37 MCIc and 68 AD were used to obtain an unbiased estimate of the performance of the methods. For AD vs CN, whole-brain methods (voxel-based or cortical thickness-based) achieved high accuracies (up to 81% sensitivity and 95% specificity). For the detection of prodromal AD (CN vs MCIc), the sensitivity was substantially lower. For the prediction of conversion, no classifier obtained significantly better results than chance. We also compared the results obtained using the DARTEL registration to that using SPM5 unified segmentation. DARTEL significantly improved six out of 20 classification experiments and led to lower results in only two cases. Overall, the use of feature selection did not improve the performance but substantially increased the computation times.",
"title": ""
},
{
"docid": "96e56dcf3d38c8282b5fc5c8ae747a66",
"text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.",
"title": ""
},
{
"docid": "a45109840baf74c61b5b6b8f34ac81d5",
"text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.",
"title": ""
},
{
"docid": "97c22bf7654160e53c24eee7ebe97333",
"text": "‘‘Sexting’’ refers to sending and receiving sexually suggestive images, videos, or texts on cell phones. As a means for maintaining or initiating a relationship, sexting behavior and attitudes may be understood through adult attachment theory. One hundred and twenty-eight participants (M = 22 and F = 106), aged 18–30 years, completed an online questionnaire about their adult attachment styles and sexting behavior and attitudes. Attachment anxiety predicted sending texts that solicit sexual activity for those individuals in relationships. Attachment anxiety also predicted positive attitudes towards sexting such as accepting it as normal, that it will enhance the relationship, and that partners will expect sexting. Sexting may be a novel form for expressing attachment anxiety. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f3803ae394163e32fe81784b671de92",
"text": "A smart community is a distributed system consisting of a set of smart homes which utilize the smart home scheduling techniques to enable customers to automatically schedule their energy loads targeting various purposes such as electricity bill reduction. Smart home scheduling is usually implemented in a decentralized fashion inside a smart community, where customers compete for the community level renewable energy due to their relatively low prices. Typically there exists an aggregator as a community wide electricity policy maker aiming to minimize the total electricity bill among all customers. This paper develops a new renewable energy aware pricing scheme to achieve this target. We establish the proof that under certain assumptions the optimal solution of decentralized smart home scheduling is equivalent to that of the centralized technique, reaching the theoretical lower bound of the community wide total electricity bill. In addition, an advanced cross entropy optimization technique is proposed to compute the pricing scheme of renewable energy, which is then integrated in smart home scheduling. The simulation results demonstrate that our pricing scheme facilitates the reduction of both the community wide electricity bill and individual electricity bills compared to the uniform pricing. In particular, the community wide electricity bill can be reduced to only 0.06 percent above the theoretic lower bound.",
"title": ""
},
{
"docid": "5ee610b61deefffc1b054d908587b406",
"text": "Self-shaping of curved structures, especially those involving flexible thin layers, is attracting increasing attention because of their broad potential applications in, e.g., nanoelectromechanical andmicroelectromechanical systems, sensors, artificial skins, stretchable electronics, robotics, and drug delivery. Here, we provide an overview of recent experimental, theoretical, and computational studies on the mechanical selfassembly of strain-engineered thin layers, with an emphasis on systems in which the competition between bending and stretching energy gives rise to a variety of deformations, such as wrinkling, rolling, and twisting. We address the principle of mechanical instabilities, which is often manifested in wrinkling or multistability of strain-engineered thin layers. The principles of shape selection and transition in helical ribbons are also systematically examined. We hope that a more comprehensive understanding of the mechanical principles underlying these rich phenomena can foster the development of techniques for manufacturing functional three-dimensional structures on demand for a broad spectrum of engineering applications.",
"title": ""
},
{
"docid": "2b3c9b9f92582af41fcde0186c9bd0f6",
"text": "Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by ~17% in mAP and ~6% in rank-1, CUHK03 [24] by ~4% in rank-1 and DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.",
"title": ""
},
{
"docid": "10f31578666795a3b1ad852929769fc5",
"text": "CNNs have been successfully used in audio, image and text classification, analysis and generation [12,17,18], whereas the RNNs with LSTM cells [5,6] have been widely adopted for solving sequence transduction problems such as language modeling and machine translation [19,3,5]. The RNN models typically align the element positions of the input and output sequences to steps in computation time for generating the sequenced hidden states, with each depending on the current element and the previous hidden state. Such operations are inherently sequential which precludes parallelization and becomes the performance bottleneck. This situation has motivated researchers to extend the easily parallelizable CNN models for more efficient sequence-to-sequence mapping. Once such efforts can deliver satisfactory quality, the usage of CNN in deep learning would be significantly broadened.",
"title": ""
},
{
"docid": "0974cee877ff2fecfda81d48012c07d3",
"text": "New method of blinking detection is proposed. The utmost important of blinking detection method is robust against different users, noise, and also change of eye shape. In this paper, we propose blinking detection method by measuring the distance between two arcs of eye (upper part and lower part). We detect eye arcs by apply Gabor filter onto eye image. As we know that Gabor filter has advantage on image processing application since it able to extract spatial localized spectral features such as line, arch, and other shapes. After two of eye arcs are detected, we measure the distance between arcs of eye by using connected labeling method. The open eye is marked by the distance between two arcs is more than threshold and otherwise, the closed eye is marked by the distance less than threshold. The experiment result shows that our proposed method robust enough against different users, noise, and eye shape changes with perfectly accuracy.",
"title": ""
},
{
"docid": "8503c9989f9706805a74bbd5c964ab07",
"text": "Since the phenomenon of cloud computing was proposed, there is an unceasing interest for research across the globe. Cloud computing has been seen as unitary of the technology that poses the next-generation computing revolution and rapidly becomes the hottest topic in the field of IT. This fast move towards Cloud computing has fuelled concerns on a fundamental point for the success of information systems, communication, virtualization, data availability and integrity, public auditing, scientific application, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to precise the current open challenges and issues of Cloud computing. We have discussed the paper in three-fold: first we discuss the cloud computing architecture and the numerous services it offered. Secondly we highlight several security issues in cloud computing based on its service layer. Then we identify several open challenges from the Cloud computing adoption perspective and its future implications. Finally, we highlight the available platforms in the current era for cloud research and development.",
"title": ""
},
{
"docid": "5546cbb6fac77d2d9fffab8ba0a50ed8",
"text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0a8150abf09c6551e4cd771d12ed66c1",
"text": "Sarcasm presents a negative meaning with positive expressions and is a non-literalistic expression. Sarcasm detection is an important task because it contributes directly to the improvement of the accuracy of sentiment analysis tasks. In this study, we propose a extraction method of sarcastic sentences in product reviews. First, we analyze sarcastic sentences in product reviews and classify the sentences into 8 classes by focusing on evaluation expressions. Next, we generate classification rules for each class and use them to extract sarcastic sentences. Our method consists of three stage, judgment processes based on rules for 8 classes, boosting rules and rejection rules. In the experiment, we compare our method with a baseline based on a simple rule. The experimental result shows the effectiveness of our method.",
"title": ""
},
{
"docid": "a289829cb63b56280a1e06f69c6670a9",
"text": "This article presents an overview of the ability model of emotional intelligence and includes a discussion about how and why the concept became useful in both educational and workplace settings. We review the four underlying emotional abilities comprising emotional intelligence and the assessment tools that that have been developed to measure the construct. A primary goal is to provide a review of the research describing the correlates of emotional intelligence. We describe what is known about how emotionally intelligent people function both intraand interpersonally and in both academic and workplace settings. The facts point in one direction: The job offer you have in hand is perfect – great salary, ideal location, and tremendous growth opportunities. Yet, there is something that makes you feel uneasy about resigning from your current position and moving on. What will you do? Ignore the feeling and choose what appears to be the logical path, or go with your gut and risk disappointing your family? Or, might you consider both your thoughts and feelings about the job in order to make the decision? Solving problems and making wise decisions using both thoughts and feelings or logic and intuition is a part of what we refer to as emotional intelligence (Mayer & Salovey, 1997; Salovey & Mayer, 1990). Linking emotions and intelligence was relatively novel when first introduced in a theoretical model about twenty years ago (Salovey & Mayer, 1990; but see Gardner, 1983 ⁄1993). Among the many questions posed by both researchers and laypersons alike were: Is emotional intelligence an innate, nonmalleable mental ability? Can it be acquired with instruction and training? Is it a new intelligence or just the repackaging of existing constructs? How can it be measured reliably and validly? What does the existence of an emotional intelligence mean in everyday life? In what ways does emotional intelligence affect mental health, relationships, daily decisions, and academic and workplace performance? In this article, we provide an overview of the theory of emotional intelligence, including a brief discussion about how and why the concept has been used in both educational and workplace settings. Because the field is now replete with articles, books, and training manuals on the topic – and because the definitions, claims, and measures of emotional intelligence have become extremely diverse – we also clarify definitional and measurement issues. A final goal is to provide an up-to-date review of the research describing what the lives of emotionally intelligent people ‘look like’ personally, socially, academically, and in the workplace. Social and Personality Psychology Compass 5/1 (2011): 88–103, 10.1111/j.1751-9004.2010.00334.x a 2011 The Authors Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd What is Emotional Intelligence? Initial conception of emotional intelligence Emotional intelligence was described formally by Salovey and Mayer (1990). They defined it as ‘the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and to use this information to guide one’s thinking and actions’ (p. 189). They also provided an initial empirical demonstration of how an aspect of emotional intelligence could be measured as a mental ability (Mayer, DiPaolo, & Salovey, 1990). In both articles, emotional intelligence was presented as a way to conceptualize the relation between cognition and affect. 
Historically, ‘emotion’ and ‘intelligence’ were viewed as being in opposition to one another (Lloyd, 1979). How could one be intelligent about the emotional aspects of life when emotions derail individuals from achieving their goals (e.g., Young, 1943)? The theory of emotional intelligence suggested the opposite: emotions make cognitive processes adaptive and individuals can think rationally about emotions. Emotional intelligence is an outgrowth of two areas of psychological research that emerged over forty years ago. The first area, cognition and affect, involved how cognitive and emotional processes interact to enhance thinking (Bower, 1981; Isen, Shalker, Clark, & Karp, 1978; Zajonc, 1980). Emotions like anger, happiness, and fear, as well as mood states, preferences, and bodily states, influence how people think, make decisions, and perform different tasks (Forgas & Moylan, 1987; Mayer & Bremer, 1985; Salovey & Birnbaum, 1989). The second was an evolution in models of intelligence itself. Rather than viewing intelligence strictly as how well one engaged in analytic tasks associated with memory, reasoning, judgment, and abstract thought, theorists and investigators began considering intelligence as a broader array of mental abilities (e.g., Cantor & Kihlstrom, 1987; Gardner, 1983 ⁄1993; Sternberg, 1985). Sternberg (1985), for example, urged educators and scientists to place an emphasis on creative abilities and practical knowledge that could be acquired through careful navigation of one’s everyday environment. Gardner’s (1983) ‘personal intelligences,’ including the capacities involved in accessing one’s own feeling life (intrapersonal intelligence) and the ability to monitor others’ emotions and mood (interpersonal intelligence), provided a compatible backdrop for considering emotional intelligence as a viable construct. Popularization of emotional intelligence The term ‘emotional intelligence’ was mostly unfamiliar to researchers and the general public until Goleman (1995) wrote the best-selling trade book, Emotional Intelligence: Why it can Matter More than IQ. The book quickly caught the eye of the media, public, and researchers. In it, Goleman described how scientists had discovered a connection between emotional competencies and prosocial behavior; he also declared that emotional intelligence was both an answer to the violence plaguing our schools and ‘as powerful and at times more powerful than IQ’ in predicting success in life (Goleman, 1995; p. 34). Both in the 1995 book and in a later book focusing on workplace applications of emotional intelligence (Goleman, 1998), Goleman described the construct as an array of positive attributes including political awareness, self-confidence, conscientiousness, and achievement motives rather than focusing only on an intelligence that could help individuals solve problems effectively (Brackett & Geher, 2006). Goleman’s views on emotional intelligence, in part because they were articulated for ⁄ to the general public, extended Emotional Intelligence 89 a 2011 The Authors Social and Personality Psychology Compass 5/1 (2011): 88–103, 10.1111/j.1751-9004.2010.00334.x Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd beyond the empirical evidence that was available (Davies, Stankov, & Roberts, 1998; Hedlund & Sternberg, 2000; Mayer & Cobb, 2000). 
Yet, people from all professions – educators, psychologists, human resource professionals, and corporate executives – began to incorporate emotional intelligence into their daily vernacular and professional practices. Definitions and measures of emotional intelligence varied widely, with little consensus about what emotional intelligence is and is not. Alternative models of emotional intelligence Today, there are two scientific approaches to emotional intelligence. They can be characterized as the ability model and mixed models (Mayer, Caruso, & Salovey, 2000). The ability model views emotional intelligence as a standard intelligence and argues that the construct meets traditional criteria for an intelligence (Mayer, Roberts, & Barsade, 2008b; Mayer & Salovey, 1997; Mayer, Salovey, & Caruso, 2008a). Proponents of the ability model measure emotional intelligence as a mental ability with performance assessments that have a criterion of correctness (i.e., there are better and worse answers, which are determined using complex scoring algorithms). Mixed models are so called because they mix the ability conception with personality traits and competencies such as optimism, self-esteem, and emotional self-efficacy (see Cherniss, 2010, for a review). Proponents of this approach use self-report instruments as opposed to performance assessments to measure emotional intelligence (i.e., instead of asking people to demonstrate how they perceive an emotional expression accurately, self-report measures ask people to judge and report how good they are at perceiving others’ emotions accurately). There has been a debate about the ideal method to measure emotional intelligence. On the surface, self-report (or self-judgment) scales are desirable: they are less costly, easier to administer, and take considerably less time to complete than performance tests (Brackett, Rivers, Shiffman, Lerner, & Salovey, 2006). However, it is well known that self-report measures are problematic because respondents can provide socially desirable responses rather than truthful ones, or respondents may not actually know how good they are at emotion-based tasks – to whom do they compare themselves (e.g., DeNisi & Shaw, 1977; Paulhus, Lysy, & Yik, 1998)? As they apply to emotional intelligence, selfreport measures are related weakly to performance assessments and lack discriminant validity from existing measures of personality (Brackett & Mayer, 2003; Brackett et al., 2006). In a meta-analysis of 13 studies that compared performance tests (e.g., Mayer, Salovey, & Caruso, 2002) and self-report scales (e.g., EQ-i; Bar-On, 1997), Van Rooy, Viswesvaran, and Pluta (2005) reported that performance tests were relatively distinct from self-report measures (r = 0.14). Even when a self-report measure is designed to map onto performance tests, correlations are very low (Brackett et al., 2006a). Finally, self-report measures of emotional intelligence are more susceptible to faking than performance tests (Day & Carroll, 2008). For the reasons described in this section, we assert that the ability-based definition and performance-based measure",
"title": ""
}
] |
scidocsrr
|
1ff72cb42b9b5c6fa33595c2c9f1764f
|
Forecasting Volatility in Financial Markets : A Review
|
[
{
"docid": "867c8c0286c0fed4779f550f7483770d",
"text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We nd that the volatility timing strategies outperform the unconditionally e cient static portfolios that have the same target expected return and volatility. This nding is robust to estimation risk and transaction costs.",
"title": ""
}
] |
[
{
"docid": "132516e468ba0d95506c17a17c8990ef",
"text": "Compact planar phase shifters with wide range of differential phase shift across ultra-wideband frequency are proposed. To achieve that performance, the devices use broadside coupled structure terminated with open-ended or short-ended stubs. The theory of operation for the proposed devices is derived. To validate the theory, several phase shifters are designed to achieve a differential phase ranging from -180° to 180°. Moreover, three prototypes are developed and tested. The simulated and measured results agree well with the theory and show less than 7° phase deviation and 1.4 dB insertion loss across the band 3.1-10.6 GHz.",
"title": ""
},
{
"docid": "ba206d552bb33f853972e3f2e70484bc",
"text": "Presumptive stressful life event scale Dear Sir, in different demographic and clinical categories, which has not been attempted. I have read with considerable interest the article entitled, Presumptive stressful life events scale (PSLES)-a new stressful life events scale for use in India by Gurmeet Singh et al (April 1984 issue). I think it is a commendable effort to develop such a scale which would potentially be of use in our setting. However, the research raises several questions, which have not been dealt with in the' paper. The following are the questions or comments which ask for response from the authors: a) The mode of selection of 51 items is not mentioned. If taken arbitrarily they could suggest a bias. If selected from clinical experience, there could be a likelihood of certain events being missed. An ideal way would be to record various events from a number of persons (and patients) and then prepare a list of commonly occuring events. b) It is noteworthy that certain culture specific items as dowry, birth of daughter, etc. are included. Other relevant events as conflict with in-laws (not regarding dowry), refusal by match seeking team (difficulty in finding match for marriage) and lack of son, could be considered stressful in our setting. c) Total number of life events are a function of age, as has been mentioned in the review of literature also, hence age categorisation as under 35 and over 35 might neither be proper nor sufficient. The relationship of number of life events in different age groups would be interesting to note. d) Also, more interesting would be to examine the rank order of life events e) A briefened version would be more welcome. The authors should try to evolve a version of around about 25-30 items, which could be easily applied clinically or for research purposes. As can be seen, from items after serial number 30 (Table 4) many could be excluded. f) The cause and effect relationship is difficult to comment from the results given by the scale. As is known, 'stressfulness' of the event depends on an individuals perception of the event. That persons with higher neu-roticism scores report more events could partly be due to this. g) A minor point, Table 4 mentions Standard Deviations however S. D. has not been given for any item. Reply: I am grateful for the interest shown by Dr. Chaturvedi and his …",
"title": ""
},
{
"docid": "a1127dc1bf1af6aae9303b79fc75afa3",
"text": "Multilevel inverters have become more popular over the years in electric high power application with the promise of less disturbances and the possibility to function at lower switching frequencies than ordinary two-level inverters. This report presents information about several multilevel inverter topologies, such as the Neutral-Point Clamped Inverter and the Cascaded Multicell Inverter. These multilevel inverters will also be compared with two-level inverters in simulations to investigate the advantages of using multilevel inverters. Modulation strategies, component comparison and solutions to the multilevel voltage source balancing problem will also be presented in this work. It is shown that multilevel inverters only produce 22% and 32% voltage THD while the two-level inverter for the same 1kHz test produces 115% voltage THD. For another simulation, while using lower switching frequency, it is shown that when the two-level inverter generates 25.1W switching losses, the tested multilevel inverters only produce 2.1W and 2.2W switching losses.",
"title": ""
},
{
"docid": "87d14a44fbf555e7bd3ef17a44f1d884",
"text": "We present a new circuit topology for a low-voltage class AB amplifier. The circuit shows superior current efficiency in the use of the supply current to charge and discharge the output load. It uses negative feedback rather than component matching to optimize current efficiency and performance, resulting in a current boost ratio exactly equal to one. Measurement results for an example circuit fabricated in a 2m CMOS process are given. The circuit uses a quiescent supply current of 0.2 A and is able to settle to a 1% error in 1.1 ms for a 0.4-V input step and a load capacitance of 35 pF. The circuit design is straightforward and modular, and the core circuit can be used to replace the differential pair of other op-amp topologies.",
"title": ""
},
{
"docid": "f766387d1f2d9d3d3bed90b2f235b7b7",
"text": "Matrix factorization of knowledge bases in universal schema has facilitated accurate distantlysupervised relation extraction. This factorization encodes dependencies between textual patterns and structured relations using lowdimensional vectors defined for each entity pair; although these factors are effective at combining evidence for an entity pair, they are inaccurate on rare pairs, or for relations that depend crucially on the entity types. On the other hand, tensor factorization is able to overcome these shortcomings when applied to link prediction by maintaining entity-wise factors. However these models have been unsuitable for universal schema. In this paper we first present an illustration on synthetic data that explains the unsuitability of tensor factorization to relation extraction with universal schemas. Since the benefits of tensor and matrix factorization are complementary, we then investigate two hybrid methods that combine the benefits of the two paradigms. We show that the combination can be fruitful: we handle ambiguously phrased relations, achieve gains in accuracy on real-world relations, and demonstrate that entity embeddings encode entity types.",
"title": ""
},
{
"docid": "0719aecf2628c0dc431c8655243329d2",
"text": "Use of computer based decision tools to aid clinical decision making, has been a primary goal of research in biomedical informatics. Research in the last five decades has led to the development of Medical Decision Support (MDS) applications using a variety of modeling techniques, for a diverse range of medical decision problems. This paper surveys literature on modeling techniques for diagnostic decision support, with a focus on decision accuracy. Trends and shortcomings of research in this area are discussed and future directions are provided. The authors suggest that—(i) Improvement in the accuracy of MDS application may be possible by modeling of vague and temporal data, research on inference algorithms, integration of patient information from diverse sources and improvement in gene profiling algorithms; (ii) MDS research would be facilitated by public release of de-identified medical datasets, and development of opensource data-mining tool kits; (iii) Comparative evaluations of different modeling techniques are required to understand characteristics of the techniques, which can guide developers in choice of technique for a particular medical decision problem; and (iv) Evaluations of MDS applications in clinical setting are necessary to foster physicians’ utilization of these decision aids.",
"title": ""
},
{
"docid": "d55ac5425fe640210631ee0012d13c26",
"text": "Developmental dyslexia and specific language impairment (SLI) were for many years treated as distinct disorders but are now often regarded as different manifestations of the same underlying problem, differing only in severity or developmental stage. The merging of these categories has been motivated by the reconceptualization of dyslexia as a language disorder in which phonological processing is deficient. The authors argue that this focus underestimates the independent influence of semantic and syntactic deficits, which are widespread in SLI and which affect reading comprehension and impair attainment of fluent reading in adolescence. The authors suggest that 2 dimensions of impairment are needed to conceptualize the relationship between these disorders and to capture phenotypic features that are important for identifying neurobiologically and etiologically coherent subgroups.",
"title": ""
},
{
"docid": "5109aa9328094af5e552ed1cab62f09a",
"text": "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.",
"title": ""
},
{
"docid": "c1a44605e8e9b76a76bf5a2dd3539310",
"text": "This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data.",
"title": ""
},
{
"docid": "c8cbde81fbff3c8d5f52758524062a00",
"text": "te the feasibility of the approach. Successful multimodal search and retrieval requires the automatic understanding of semantic cross-modal relations, which, however, is still an open research problem. Previous work has suggested the metrics crossmodal mutual information and semantic correlation to model and predict cross-modal semantic relations of image and text. In this paper, we present an approach to predict the (cross-modal) relative abstractness level of a given image-text pair, that is whether the image is an abstraction of the text or vice versa. For this purpose, we introduce a new metric that captures this specific relationship between image and text at the Abstractness Level (ABS). We present a deep learning approach to predict this metric, which relies on an autoencoder architecture that allows us to significantly reduce the required amount of labeled training data. For this, a comprehensive set of publicly available scientific documents has been accumulated. Experimental results on a challenging test set demonstrate the feasibility of the approach.",
"title": ""
},
{
"docid": "96754dcc11dbc79fddbc812aa511958a",
"text": "A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image.",
"title": ""
},
{
"docid": "ea12c2b64eab8fdaed954450875effa8",
"text": "Transformation of experience into memories that can guide future behavior is a common ability across species. However, only humans can declare their perceptions and memories of experienced events (episodes). The medial temporal lobe (MTL) is central to episodic memory, yet the neuronal code underlying the translation from sensory information to memory remains unclear. Recordings from neurons within the brain in patients who have electrodes implanted for clinical reasons provide an opportunity to bridge physiology with cognitive theories. Recent evidence illustrates several striking response properties of MTL neurons. Responses are selective yet invariant, associated with conscious perception, can be internally generated and modulated, and spontaneously retrieved. Representation of information by these neurons is highly explicit, suggesting abstraction of information for future conscious recall.",
"title": ""
},
{
"docid": "659bf57d758481a4920f1ba203012895",
"text": "A photovoltaic (PV) panel model is at the heart of an accurate performance model for a large PV farm. This paper presents an algorithm to calculate the parameters of the one-diode model of PV modules based solely on the manufacturer's datasheets. The important feature of this algorithm is that through reformulation of the characteristic equations at various points of the current-voltage (I-V) curve, the unknown model parameters can be determined analytically. This is in contrast to many existing models which choose a value for one parameter and then calculate the other parameters through simultaneous solution of the system of equations. The calculated I-V curve is then compared with the manufacturer's curve to validate the proposed algorithm and quantify the modeling error.",
"title": ""
},
{
"docid": "8448346c102274fd81f1b0543719c2ba",
"text": "This research examines the interrelationships of trust, brand awareness/associations, perceived quality and brand loyalty in building Internet banking brand equity. The model was based on data from customers using online banking (customers of an international bank) using the PLS technique. The results suggest that perceived quality and brand loyalty are more important to explain the Internet banking brand equity than brand awareness/associations and trust. Interestingly, trust contributes only indirectly, through perceived quality and brand awareness/association to Internet banking brand equity. Online perceived benefits impact positively on customers’ trust and online perceived risks tend to be lower when trust increases.",
"title": ""
},
{
"docid": "df38d14091d6a350d8f04f8cb061428c",
"text": "Although semi-supervised variational autoencoder (SemiVAE) works in image classification task, it fails in text classification task if using vanilla LSTM as its decoder. From a perspective of reinforcement learning, it is verified that the decoder’s capability to distinguish between different categorical labels is essential. Therefore, Semi-supervised Sequential Variational Autoencoder (SSVAE) is proposed, which increases the capability by feeding label into its decoder RNN at each time-step. Two specific decoder structures are investigated and both of them are verified to be effective. Besides, in order to reduce the computational complexity in training, a novel optimization method is proposed, which estimates the gradient of the unlabeled objective function by sampling, along with two variance reduction techniques. Experimental results on Large Movie Review Dataset (IMDB) and AG’s News corpus show that the proposed approach significantly improves the classification accuracy compared with pure-supervised classifiers, and achieves competitive performance against previous advanced methods. State-of-the-art results can be obtained by integrating other pretraining-based methods.",
"title": ""
},
{
"docid": "bcdb8fea60d1d13a8c5dcf7c49632653",
"text": "There is a small but growing body of research investigating how teams form and how that affects how they perform. Much of that research focuses on teams that seek to accomplish certain tasks such as writing an article or performing a Broadway musical. There has been much less investigation of the relative performance of teams that form to directly compete against another team. In this study, we report on team-vs-team competitions in the multiplayer online battle arena game Dota 2. Here, the teams’ overall goal is to beat the opponent. We use this setting to observe multilevel factors influence the relative performance of the teams. Those factors include compositional factors or attributes of the individuals comprising a team, relational factors or prior relations among individuals within a team and ecosystem factors or overlapping prior membership of team members with others within the ecosystem of teams. We also study how these multilevel factors affect the duration of a match. Our results show that advantages at the compositional, relational and ecosystem levels predict which team will succeed in short or medium duration matches. Relational and ecosystem factors are particularly helpful in predicting the winner in short duration matches, whereas compositional factors are more important predicting winners in medium duration matches. However, the two types of relations have opposite effects on the duration of winning. None of the three multilevel factors help explain which team will win in long matches.",
"title": ""
},
{
"docid": "3ced47ece49eeec3edc5d720df9bb864",
"text": "Complex space systems typically provide the operator a means to understand the current state of system components. The operator often has to manually determine whether the system is able to perform a given set of high level objectives based on this information. The operations team needs a way for the system to quantify its capability to successfully complete a mission objective and convey that information in a clear, concise way. A mission-level space cyber situational awareness tool suite integrates the data into a complete picture to display the current state of the mission. The Johns Hopkins University Applied Physics Laboratory developed the Spyder tool suite for such a purpose. The Spyder space cyber situation awareness tool suite allows operators to understand the current state of their systems, allows them to determine whether their mission objectives can be completed given the current state, and provides insight into any anomalies in the system. Spacecraft telemetry, spacecraft position, ground system data, ground computer hardware, ground computer software processes, network connections, and network data flows are all combined into a system model service that serves the data to various display tools. Spyder monitors network connections, port scanning, and data exfiltration to determine if there is a cyber attack. The Spyder Tool Suite provides multiple ways of understanding what is going on in a system. Operators can see the logical and physical relationships between system components to better understand interdependencies and drill down to see exactly where problems are occurring. They can quickly determine the state of mission-level capabilities. The space system network can be analyzed to find unexpected traffic. Spyder bridges the gap between infrastructure and mission and provides situational awareness at the mission level.",
"title": ""
},
{
"docid": "f27636dbbb070e3041e0273d71c3c949",
"text": "An integrative framework is proposed for understanding how multiple biological and psychological systems are regulated in the context of adult attachment relationships, dysregulated by separation and loss experiences, and, potentially, re-regulated through individual recovery efforts. Evidence is reviewed for a coregulatory model of normative attachment, defined as a pattern of interwoven physiology between romantic partners that results from the conditioning of biological reward systems and the emergence of felt security within adult pair bonds. The loss of coregulation can portend a state of biobehavioral dysregulation, ranging from diffuse psychophysiological arousal and disorganization to a full-blown (and highly organized) stress response. The major task for successful recovery is adopting a self-regulatory strategy that attenuates the dysregulating effects of the attachment disruption. Research evidence is reviewed across multiple levels of analysis, and the article concludes with a series of testable research questions on the interconnected nature of attachment, loss, and recovery processes.",
"title": ""
},
{
"docid": "5f20df3abf9a4f7944af6b3afd16f6f8",
"text": "An important step towards the successful integration of information and communication technology (ICT) in schools is to facilitate their capacity to develop a school-based ICT policy resulting in an ICT policy plan. Such a plan can be defined as a school document containing strategic and operational elements concerning the integration of ICT in education. To write such a plan in an efficient way is challenging for schools. Therefore, an online tool [Planning for ICT in Schools (pICTos)] has been developed to guide schools in this process. A multiple case study research project was conducted with three Flemish primary schools to explore the process of developing a school-based ICT policy plan and the supportive role of pICTos within this process. Data from multiple sources (i.e. interviews with school leaders and ICT coordinators, school policy documents analysis and a teacher questionnaire) were collected and analysed. The results indicate that schools shape their ICT policy based on specific school data collected and presented by the pICTos environment. School teams learned about the actual and future place of ICT in teaching and learning. Consequently, different policy decisions were made according to each school’s vision on ‘good’ education and ICT integration.",
"title": ""
},
{
"docid": "bc7f80192416aa7787657aed1bda3997",
"text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.",
"title": ""
}
] |
scidocsrr
|
fd65bf9ac1151a160ff04e53cd45af6b
|
Composite sketch recognition via deep network - a transfer learning approach
|
[
{
"docid": "804cee969d47d912d8bdc40f3a3eeb32",
"text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.",
"title": ""
}
] |
[
{
"docid": "c37ede4112314a6d26c55bd841d0decd",
"text": "Plants have become an important source of energy, and are a fundamental piece in the puzzle to solve the problem of global warming. However, plant diseases are threatening the livelihood of this important source. Convolutional neural networks (CNN) have demonstrated great performance (beating that of humans) in object recognition and image classification problems. This paper describes the feasibility of CNN for plant disease classification for leaf images taken under the natural environment. The model is designed based on the LeNet architecture to perform the soybean plant disease classification. 12,673 samples containing leaf images of four classes, including the healthy leaf images, were obtained from the PlantVillage database. The images were taken under uncontrolled environment. The implemented model achieves 99.32% classification accuracy which show clearly that CNN can extract important features and classify plant diseases from images taken in the natural environment.",
"title": ""
},
{
"docid": "531d31b5b4b4e417761d83e1e9f2c0f2",
"text": "Database community has made significant research efforts to optimize query processing on GPUs in the past few years. However, we can hardly find that GPUs have been truly adopted in major warehousing production systems. Preparing to merge GPUs to the warehousing systems, we have identified and addressed several critical issues in a threedimensional study of warehousing queries on GPUs by varying query characteristics, software techniques, and GPU hardware configurations. We also propose an analytical model to understand and predict the query performance on GPUs. Based on our study, we present our performance insights for warehousing query execution on GPUs. The objective of our work is to provide a comprehensive guidance for GPU architects, software system designers, and database practitioners to narrow the speed gap between the GPU kernel execution (the fast mode) and data transfer to prepare GPU execution (the slow mode) for high performance in processing data warehousing queries. The GPU query engine developed in this work is open source to the public.",
"title": ""
},
{
"docid": "a2187258ceccb5483b352b286641cf63",
"text": "In this paper we present preliminary results of a novel unsupervised approach for highprecision detection and correction of errors in the output of automatic speech recognition systems. We model the likely contexts of all words in an ASR system vocabulary by performing a lexical co-occurrence analysis using a large corpus of output from the speech system. We then identify regions in the data that contain likely contexts for a given query word. Finally, we detect words or sequences of words in the contextual regions that are unlikely to appear in the context and that are phonetically similar to the query word. Initial experiments indicate that this technique can produce high-precision targeted detection and correction of misrecognized query words.",
"title": ""
},
{
"docid": "36e4260c43efca5a67f99e38e5dbbed8",
"text": "The inherent compliance of soft fluidic actuators makes them attractive for use in wearable devices and soft robotics. Their flexible nature permits them to be used without traditional rotational or prismatic joints. Without these joints, however, measuring the motion of the actuators is challenging. Actuator-level sensors could improve the performance of continuum robots and robots with compliant or multi-degree-of-freedom joints. We make the reinforcing braid of a pneumatic artificial muscle (PAM or McKibben muscle) “smart” by weaving it from conductive insulated wires. These wires form a solenoid-like circuit with an inductance that more than doubles over the PAM contraction. The reinforcing and sensing fibers can be used to measure the contraction of a PAM actuator with a simple linear function of the measured inductance, whereas other proposed self-sensing techniques rely on the addition of special elastomers or transducers, the technique presented in this paper can be implemented without modifications of this kind. We present and experimentally validate two models for Smart Braid sensors based on the long solenoid approximation and the Neumann formula, respectively. We test a McKibben muscle made from a Smart Braid in quasi-static conditions with various end loads and in dynamic conditions. We also test the performance of the Smart Braid sensor alongside steel.",
"title": ""
},
{
"docid": "078b99052a3c0bbf22c5b1eca9cf0019",
"text": "Internet of Things arises as a computational paradigm that promotes the interconnection of objects to the Internet and enables interaction, operational efficiency, and communication. With the increasing inclusion in the network of intelligent objects that have characteristics such as diversity, heterogeneity, mobility and low computational power, it is fundamental to develop mechanisms that allow management and control. In addition, it is important to identify whether the assets are working properly or have anomalies. Traffic classification techniques are important to aid in network analysis and to handle many other key aspects such as security, management, access control, provisioning, and resource allocation. In order to promote the identification of network devices, especially IoT, this article presents a technique that uses Random Forest, a supervised automatic learning algorithm, together with the inspection of the contents of the packages for this purpose. Also, we use the same algorithm to perform the classification of network traffic. In the end, the identification of the devices showed an accuracy of approximately 99%.",
"title": ""
},
{
"docid": "a545f1c872d0631244202c9c8a98969c",
"text": "Leakage of user location and traffic patterns is a serious security threat with significant implications on privacy as reported by recent surveys and identified by the US Congress Location Privacy Protection Act of 2014. While mobile phones can restrict the explicit access to location information to applications authorized by the user, they are ill-equipped to protect against side-channel attacks. In this paper, we show that a zero-permissions Android app can infer vehicular users' location and traveled routes, with high accuracy and without the users' knowledge, using gyroscope, accelerometer, and magnetometer information. We modeled this problem as a maximum likelihood route identification on a graph. The graph is generated from the OpenStreetMap publicly available database of roads. Our route identification algorithms output both a ranked list of potential routes as well a ranked list of route-clusters. Through extensive simulations over 11 cities, we show that for most cities with probability higher than 50% it is possible to output a short list of 10 routes containing the traveled route. In real driving experiments (over 980 Km) in the cities of Boston (resp. Waltham), Massachusetts, we report a probability of 30% (resp. 60%) of inferring a list of 10 routes containing the true route.",
"title": ""
},
{
"docid": "a8531b935acbc3fd94be7c36d0412aab",
"text": "While convolutional neural networks (CNN) produce state-of-the-art results in many applications including biomedical image analysis, they are not robust to variability in the data that is not well represented by the training set. An important source of variability in biomedical images is the appearance of objects such as contrast and texture due to different imaging settings. We introduce the neighborhood similarity layer (NSL) which can be used in a CNN to improve robustness to changes in the appearance of objects that are not well represented by the training data. The proposed NSL transforms its input feature map at a given pixel by computing its similarity to the surrounding neighborhood. This transformation is spatially varying, hence not a convolution. It is differentiable; therefore, networks including the proposed layer can be trained in an end-to-end manner. We demonstrate the advantages of the NSL for the vasculature segmentation and cell detection problems.",
"title": ""
},
{
"docid": "ad8aacb65cef9abe3e232d4bec484dca",
"text": "The advent of emerging technologies such as Web services, service-oriented architecture, and cloud computing has enabled us to perform business services more efficiently and effectively. However, we still suffer from unintended security leakages by unauthorized actions in business services while providing more convenient services to Internet users through such a cutting-edge technological growth. Furthermore, designing and managing Web access control policies are often error-prone due to the lack of effective analysis mechanisms and tools. In this paper, we represent an innovative policy anomaly analysis approach for Web access control policies. We focus on XACML (eXtensible Access Control Markup Language) policy since XACML has become the de facto standard for specifying and enforcing access control policies for various Web-based applications and services. We introduce a policy-based segmentation technique to accurately identify policy anomalies and derive effective anomaly resolutions. We also discuss a proof-of-concept implementation of our method called XAnalyzer and demonstrate how efficiently our approach can discover and resolve policy anomalies.",
"title": ""
},
{
"docid": "ab430a12088341758de5cde60ef26070",
"text": "BACKGROUND\nThe nonselective 5-HT(4) receptor agonists, cisapride and tegaserod have been associated with cardiovascular adverse events (AEs).\n\n\nAIM\nTo perform a systematic review of the safety profile, particularly cardiovascular, of 5-HT(4) agonists developed for gastrointestinal disorders, and a nonsystematic summary of their pharmacology and clinical efficacy.\n\n\nMETHODS\nArticles reporting data on cisapride, clebopride, prucalopride, mosapride, renzapride, tegaserod, TD-5108 (velusetrag) and ATI-7505 (naronapride) were identified through a systematic search of the Cochrane Library, Medline, Embase and Toxfile. Abstracts from UEGW 2006-2008 and DDW 2008-2010 were searched for these drug names, and pharmaceutical companies approached to provide unpublished data.\n\n\nRESULTS\nRetrieved articles on pharmacokinetics, human pharmacodynamics and clinical data with these 5-HT(4) agonists, are reviewed and summarised nonsystematically. Articles relating to cardiac safety and tolerability of these agents, including any relevant case reports, are reported systematically. Two nonselective 5-HT(4) agonists had reports of cardiovascular AEs: cisapride (QT prolongation) and tegaserod (ischaemia). Interactions with, respectively, the hERG cardiac potassium channel and 5-HT(1) receptor subtypes have been suggested to account for these effects. No cardiovascular safety concerns were reported for the newer, selective 5-HT(4) agonists prucalopride, velusetrag, naronapride, or for nonselective 5-HT(4) agonists with no hERG or 5-HT(1) affinity (renzapride, clebopride, mosapride).\n\n\nCONCLUSIONS\n5-HT(4) agonists for GI disorders differ in chemical structure and selectivity for 5-HT(4) receptors. Selectivity for 5-HT(4) over non-5-HT(4) receptors may influence the agent's safety and overall risk-benefit profile. Based on available evidence, highly selective 5-HT(4) agonists may offer improved safety to treat patients with impaired GI motility.",
"title": ""
},
{
"docid": "a4ae0d8042316362380b1976f8278743",
"text": "We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal Components Analysis, we show that computational systems based on these averages consistently outperform systems based on collections of instances. Furthermore, the quality of the average improves as more images are used to derive it. These simulations are carried out with famous faces, over which we had no control of superficial image characteristics. We then present data from three experiments demonstrating that image averaging can also improve recognition by human observers. Finally, we describe how PCA on image averages appears to preserve identity-specific face information, while eliminating non-diagnostic pictorial information. We therefore suggest that this is a good candidate for a robust face representation.",
"title": ""
},
{
"docid": "0069b06db18ea5d2c6079fcb9f1bae92",
"text": "State-of-the-art techniques in Generative Adversarial Networks (GANs) such as cycleGAN is able to learn the mapping of one image domain X to another image domain Y using unpaired image data. We extend the cycleGAN to Conditional cycleGAN such that the mapping from X to Y is subjected to attribute condition Z. Using face image generation as an application example, where X is a low resolution face image, Y is a high resolution face image, and Z is a set of attributes related to facial appearance (e.g. gender, hair color, smile), we present our method to incorporate Z into the network, such that the hallucinated high resolution face image Y ′ not only satisfies the low resolution constrain inherent in X , but also the attribute condition prescribed by Z. Using face feature vector extracted from face verification network as Z, we demonstrate the efficacy of our approach on identitypreserving face image super-resolution. Our approach is general and applicable to high-quality face image generation where specific facial attributes can be controlled easily in the automatically generated results.",
"title": ""
},
{
"docid": "9bcba344fa5dda7c058b5bf3c6e5944f",
"text": "Through the recent NTCIR workshops, patent retrieval casts many challenging issues to information retrieval community. Unlike newspaper articles, patent documents are very long and well structured. These characteristics raise the necessity to reassess existing retrieval techniques that have been mainly developed for structure-less and short documents such as newspapers. This study investigates cluster-based retrieval in the context of invalidity search task of patent retrieval. Cluster-based retrieval assumes that clusters would provide additional evidence to match user’s information need. Thus far, cluster-based retrieval approaches have relied on automatically-created clusters. Fortunately, all patents have manuallyassigned cluster information, international patent classification codes. International patent classification is a standard taxonomy for classifying patents, and has currently about 69,000 nodes which are organized into a five-level hierarchical system. Thus, patent documents could provide the best test bed to develop and evaluate cluster-based retrieval techniques. Experiments using the NTCIR-4 patent collection showed that the cluster-based language model could be helpful to improving the cluster-less baseline language model. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "326b8d8d5d128706796d3107a6c2c941",
"text": "Capturing security and privacy requirements in the early stages of system development is essential for creating sufficient public confidence in order to facilitate the adaption of novel systems such as the Internet of Things (IoT). However, security and privacy requirements are often not handled properly due to their wide variety of facets and aspects which make them difficult to formulate. In this study, security-related requirements of IoT heterogeneous systems are decomposed into a taxonomy of quality attributes, and existing security mechanisms and policies are proposed to alleviate the identified forms of security attacks and to reduce the vulnerabilities in the future development of the IoT systems. Finally, the taxonomy is applied on an IoT smart grid scenario.",
"title": ""
},
{
"docid": "7cd992aec08167cb16ea1192a511f9aa",
"text": "In this thesis, we will present an Echo State Network (ESN) to investigate hierarchical cognitive control, one of the functions of Prefrontal Cortex (PFC). This ESN is designed with the intention to implement it as a robot controller, making it useful for biologically inspired robot control and for embodied and embedded PFC research. We will apply the ESN to a n-back task and a Wisconsin Card Sorting task to confirm the hypothesis that topological mapping of temporal and policy abstraction over the PFC can be explained by the effects of two requirements: a better preservation of information when information is processed in different areas, versus a better integration of information when information is processed in a single area.",
"title": ""
},
{
"docid": "b1ff2bcf087530ab01f3784d5a7dfa04",
"text": "Cross-national comparisons of relational work styles suggest that the United States is an anomaly in its low relational focus. This article describes Protestant Relational Ideology (PRI), a cultural construct that explains the origins and nature of this anomaly. This construct refers to a deepseated belief that affective and relational concerns are considered inappropriate in work settings and, therefore, are to be given less attention than in social, non-work settings. Akin to an institutional imprinting perspective, a review of sociological and historical research links PRI to the beliefs and practices of the founding communities of American society. A social cognition perspective is used to explain the mechanisms through which PRI influences American relational workways. The article also describes a program of research that uses PRI to address a wider set of organizational behavior issues that include: antecedents of prejudice and discrimination in diverse organizations; sources of intercultural miscommunication; beliefs about team conflict; mental models of “professionalism” and its effect on organizational recruitment and selection. PROTESTANT RELATIONAL IDEOLOGY 3 Protestant Relational Ideology: The Cognitive Underpinnings and Organizational Implications of an American Anomaly [Our] practices and beliefs appear to us natural, permanent, and inevitable, whereas the particular conditions that make them possible often remain invisible. Asch (1952) In the corridors of American organizations, “Focus on the task” and “Don’t take things personally” are familiar words of advice, clichés repeated as subtle reminders about what it means to act “professionally.” The message is sometimes stated more bluntly. James Clifton, CEO of The Gallup Organization, tells of how frequently managers raise concerns about one particular item in Gallup’s popular ‘Q12 Survey’ on employee engagement: the one that asks, “Do you have a best friend at work?” As one manager states, “We discourage friendships in the workplace.” These directions for appropriate work behavior reflect a deep-seated sentiment that affective and relational concerns ought to be put aside at work in order to direct one’s attention to the task at hand. To be productive and efficient is prima facie to leave personal issues and emotional sensitivity at the office door. Exceptions to this organizational preference in the United States for maintaining a polite but impersonal work style have been found, primarily in countries outside North America and Northern Europe. In these societies researchers have documented several unique cultural imperatives that specifically encourage people to closely monitor social–emotional cues in virtually all interpersonal situations (Ayman & Chemers, 1983; Diaz-Guerrero, 1967; Earley, 1997; Markus & Kitayama, 1991; Triandis, Marin, Liansky & Betancourt, 1984). The traditional path taken to account for these exceptions has been to generate theory grounded in values and traditions indigenous to their respective cultures. For example, the emphasis on expressive social PROTESTANT RELATIONAL IDEOLOGY 4 emotionality and harmony in Mexican culture has been traced to the indigenous cultural value of simpatia in Mexican society (Diaz-Guerreo, 1967; Triandis, et al., 1984). Chinese preferences to conduct business through a web of loyal interpersonal networks are described as a manifestation of quanxi in Chinese society (Bond, 1986; Tsui & Farh, 1997). 
Familial characteristics of business relations in many Korean organizations are conceptualized with respect to the Korean tradition of chaebol (Kim, 1988). Heightened sensitivity among the Japanese to the needs and concerns of others is argued to stem from the central role of amae in Japanese society (Doi, 1962). Such cultural studies offer rich theoretical accounts of relational styles abroad that deviate from the impersonal ideal workway of North America and Northern Europe. What is an exception versus what is the modal tendency, however, is more clearly revealed in comparative research designed to differentiate cultures along broad relational dimensions. This literature shows that in contrast to American patterns, heightened attention to relational concerns is in fact common across a large and diverse set of societies. For example, independent American self-construals contrast with more relationally sensitive, interdependent Japanese self-construals (Markus & Kitayama, 1991). American individualism also stands out from the harmony-focused Chinese collectivism (Bond, 1986; Earley & Gibson, 1998) or the Italian and French emphasis on team goals over an individual’s goals (Hampden-Turner & Trompenaars, 1993). American preferences for task-focused leaders over social-emotional leaders vary from Indian preferences for leaders high in both domains (Kool & Saksena, 1988; Sinha, 1979, 1980). Lack of attention to contextual details and relational cues in communication distinguish American social interactions from high-context Latin American, Chinese and Korean exchanges (Earley, 1997; Hall, 1983; Sanchez-Burks, et al., 2003; Ting-Toomey, 1991). The pattern that emerges suggests that mainstream American society is a cultural anomaly in its low degree of relational focus. Across East Asia, Latin America, India, the Middle East and parts of Europe, social-emotional concerns are carefully monitored in virtually all interpersonal situations. One shortcoming of this literature, however, is that a different explanation is offered for every deviation from mainstream American patterns when it appears that it is in fact the American patterns that deviate from the norm. Anomalies beg to be explained. Moreover, the possibility of an American anomaly in relational work style has important implications for the field given the reliance on primarily American samples to generate and validate theory. What then can explain the origin and psychological nature of what appears to be a peculiar relational work style? This article describes a cultural construct called Protestant Relational Ideology, or PRI (Sanchez-Burks, et al., 2003). This construct will be used to address these questions and to explore several organizational behavior dynamics influenced by an attention to affective and relational concerns in the workplace. The next section reviews research on interpersonal patterns across cultures to further examine the extent to which mainstream American society appears as a cultural anomaly in its low degree of relational focus. The PRI construct is introduced next, followed by a review of sociological and historical research that links its origins to the ideology and practices of the founding communities of American society. A social cognition perspective is used to explain the mechanisms through which PRI influences workplace perceptions, decisions and behavior. 
After reviewing experimental evidence validating PRI’s main propositions, a program of research is described that uses PRI to address a wider set of issues that include: (1) antecedents of prejudice and discrimination in diverse organizations; (2) sources of intercultural difficulties in communication; (3) cultural variation in beliefs about team conflict and its implications; and (4) implicit mental models of “professionalism” and its effect on organizational recruitment and selection. Relational Styles across Cultures There has been a long-standing interest within the social sciences in mapping out variation in how cultures define and structure the nature of interpersonal relations. The definitions of culture that underlie many of these formulations resonate with what the cognitive anthropologist Sperber (1996) describes as community-specific ideas about what is true, good and efficient. As Sumner (1906/1979) argued, these unique folkways have a directive and historical force and as such are part of the fundamental building blocks for a society. In short, culture in this area of inquiry refers to “shared understandings made manifest in act and artifact,” (Redfield, 1947). The constructs most often studied by psychologists to capture this variation in relational style include independence–interdependence (Markus & Kitayama, 1991; Singelis, 1994), individualism–collectivism (Hofstede, 1980; Hsu, 1981; Triandis, 1995; Triandis, Malpass & Davidson, 1972) and high context–low context cultures (Hall, 1976). The construct of independence–interdependence focuses specifically on the nature of the relationship between self and other (Singelis, 1994). Markus and Kitayama (1991) argue that members of interdependent cultures, for example the Japanese and Koreans, place importance on maintaining interpersonal harmony and remain highly attentive to the needs, desires and goals of others in social interactions. In contrast, members of independent cultures such as in the U.S., emphasize individual happiness and focus on how relationships can serve their own needs, desires and goals. Ambady and colleagues (1996) show that whereas Korean managers structure the way they convey information based on the relationship between self and other, Americans are less influenced by the relationship than by the content of the information being conveyed. Research within the individualism–collectivism tradition makes similar distinctions between self and other but focuses more on the relationship between the individual and the group. Ting-Toomey and colleagues (Ting-Toomey, 1988; Ting-Toomey et al., 1991) have argued that collectivists, more often than individualists, make a large relational investment in ingroup members. The collectivism of the Chinese is reflected in their use of language that maintains “face” for self and other—a strategy that reaffirms interpersonal bonds (Earley, 1993, 1997; Hu, 1944). Americans instead rely on language that is geared more toward conveying information than to",
"title": ""
},
{
"docid": "c773efb805899ee9e365b5f19ddb40bc",
"text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.",
"title": ""
},
{
"docid": "a99b1a9409ea1241695590814e685828",
"text": "A two-phase heat spreader has been developed for cooling high heat flux sources in high-power lasers, high-intensity light-emitting diodes (LEDs), and semiconductor power devices. The heat spreader uses a passive mechanism to cool heat sources with fluxes as high as 5 W/mm2 without requiring any active power consumption for the thermal solution. The prototype is similar to a vapor chamber in which water is injected into an evacuated, air-tight shell. The shell consists of an evaporator plate, a condenser plate and an adiabatic section. The heat source is made from aluminum nitride, patterned with platinum. The heat source contains a temperature sensor and is soldered to a copper substrate that serves as the evaporator. Tests were performed with several different evaporator microstructures at different heat loads. A screen mesh was able to dissipate heat loads of 2 W/mm2, but at unacceptably high evaporator temperatures. For sintered copper powder with a 50 µm particle diameter, a heat load of 8.5 W/mm2 was supported, without the occurrence of dryout. A sintered copper powder surface coated with multi-walled carbon nanotubes (CNT) that were rendered hydrophilic showed a lowered thermal resistance for the device.",
"title": ""
},
{
"docid": "4f5b26ab2d8bd68953d473727f6f5589",
"text": "OBJECTIVE\nThe study assessed the impact of mindfulness training on occupational safety of hospital health care workers.\n\n\nMETHODS\nThe study used a randomized waitlist-controlled trial design to test the effect of an 8-week mindfulness-based stress reduction (MBSR) course on self-reported health care worker safety outcomes, measured at baseline, postintervention, and 6 months later.\n\n\nRESULTS\nTwenty-three hospital health care workers participated in the study (11 in immediate intervention group; 12 in waitlist control group). The MBSR training decreased workplace cognitive failures (F [1, 20] = 7.44, P = 0.013, (Equation is included in full-text article.)) and increased safety compliance behaviors (F [1, 20] = 7.79, P = 0.011, (Equation is included in full-text article.)) among hospital health care workers. Effects were stable 6 months following the training. The MBSR intervention did not significantly affect participants' promotion of safety in the workplace (F [1, 20] = 0.40, P = 0.54, (Equation is included in full-text article.)).\n\n\nCONCLUSIONS\nMindfulness training may potentially decrease occupational injuries of health care workers.",
"title": ""
},
{
"docid": "680d755a3a6d8fcd926eb441fad5aa57",
"text": "DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems.\nIn this paper, we propose a new framework for discovering interactions between genes based on multiple expression measurements This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a graph-based model of joint multi-variate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes, and for providing clear methodologies for learning from (noisy) observations.\nWe start by showing how Bayesian networks can describe interactions between genes. We then present an efficient algorithm capable of learning such networks and statistical method to assess our confidence in their features. Finally, we apply this method to the S. cerevisiae cell-cycle measurements of Spellman et al. [35] to uncover biological features",
"title": ""
},
{
"docid": "b3ffb805b3dcffc4e5c9cec47f90e566",
"text": "Real-time ride-sharing, which enables on-the-fly matching between riders and drivers (even en-route), is an important problem due to its environmental and societal benefits. With the emergence of many ride-sharing platforms (e.g., Uber and Lyft), the design of a scalable framework to match riders and drivers based on their various constraints while maximizing the overall profit of the platform becomes a distinguishing business strategy.\n A key challenge of such framework is to satisfy both types of the users in the system, e.g., reducing both riders' and drivers' travel distances. However, the majority of the existing approaches focus only on minimizing the total travel distance of drivers which is not always equivalent to shorter trips for riders. Hence, we propose a fair pricing model that simultaneously satisfies both the riders' and drivers' constraints and desires (formulated as their profiles). In particular, we introduce a distributed auction-based framework where each driver's mobile app automatically bids on every nearby request taking into account many factors such as both the driver's and the riders' profiles, their itineraries, the pricing model, and the current number of riders in the vehicle. Subsequently, the server determines the highest bidder and assigns the rider to that driver. We show that this framework is scalable and efficient, processing hundreds of tasks per second in the presence of thousands of drivers. We compare our framework with the state-of-the-art approaches in both industry and academia through experiments on New York City's taxi dataset. Our results show that our framework can simultaneously match more riders to drivers (i.e., higher service rate) by engaging the drivers more effectively. Moreover, our frame-work schedules shorter trips for riders (i.e., better service quality). Finally, as a consequence of higher service rate and shorter trips, our framework increases the overall profit of the ride-sharing platforms.",
"title": ""
}
] |
scidocsrr
|
5bacd831dda6f4b0a911e0abfb9d2dd0
|
A single-input dual-output Boost converter with PI controller
|
[
{
"docid": "243f86b185b98fa1ca35857e30c55529",
"text": "In this paper, a novel high step-up dc-dc converter with coupled-inductor and voltage-doubler circuits is proposed. The converter achieves high step-up voltage gain with appropriate duty ratio and low voltage stress on the power switches. Also, the energy stored in the leakage inductor of the coupled inductor can be recycled to the output. The operating principles and the steady-state analyses of the proposed converter are discussed in detail. Finally, a prototype circuit of the proposed converter is implemented in the laboratory to verify the performance of the proposed converter.",
"title": ""
},
{
"docid": "2b8efba9363b5f177089534edeb877a9",
"text": "This article presents a methodology that allows the development of new converter topologies for single-input, multiple-output (SIMO) from different basic configurations of single-input, single-output dc-dc converters. These typologies have in common the use of only one power-switching device, and they are all nonisolated converters. Sixteen different topologies are highlighted, and their main features are explained. The 16 typologies include nine twooutput-type, five three-output-type, one four-output-type, and one six-output-type dc-dc converter configurations. In addition, an experimental prototype of a three-output-type configuration with six different output voltages based on a single-ended primary inductance (SEPIC)-Cuk-boost combination converter was developed, and the proposed design methodology for a basic converter combination was experimentally verified.",
"title": ""
}
] |
[
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "df96263c86a36ed30e8a074354b09239",
"text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: chintha@ece.ualberta.ca Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010",
"title": ""
},
{
"docid": "cced83a80de88c973fb3f7b6db8b9031",
"text": "Derivation of reduced order representations of dynamical systems requires the modeling of the truncated dynamics on the retained dynamics. In its most general form, this so-called closure model has to account for memory effects. In this work, we present a framework of operator inference to extract the governing dynamics of closure from data in a compact, non-Markovian form. We employ sparse polynomial regression and artificial neural networks to extract the underlying operator. For a special class of non-linear systems, observability of the closure in terms of the resolved dynamics is analyzed and theoretical results are presented on the compactness of the memory. The proposed framework is evaluated on examples consisting of linear to nonlinear systems with and without chaotic dynamics, with an emphasis on predictive performance on unseen data.",
"title": ""
},
{
"docid": "c10adaa38fd3f832767daf5e0baf07f5",
"text": "Cellular senescence entails essentially irreversible replicative arrest, apoptosis resistance, and frequently acquisition of a pro-inflammatory, tissue-destructive senescence-associated secretory phenotype (SASP). Senescent cells accumulate in various tissues with aging and at sites of pathogenesis in many chronic diseases and conditions. The SASP can contribute to senescence-related inflammation, metabolic dysregulation, stem cell dysfunction, aging phenotypes, chronic diseases, geriatric syndromes, and loss of resilience. Delaying senescent cell accumulation or reducing senescent cell burden is associated with delay, prevention, or alleviation of multiple senescence-associated conditions. We used a hypothesis-driven approach to discover pro-survival Senescent Cell Anti-apoptotic Pathways (SCAPs) and, based on these SCAPs, the first senolytic agents, drugs that cause senescent cells to become susceptible to their own pro-apoptotic microenvironment. Several senolytic agents, which appear to alleviate multiple senescence-related phenotypes in pre-clinical models, are beginning the process of being translated into clinical interventions that could be transformative.",
"title": ""
},
{
"docid": "f384196178bf6336d0708718e5b4b378",
"text": "Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, the \"mix\" of different applications used at a site, and the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.",
"title": ""
},
{
"docid": "acff8bc4a955a3a41796138151035e38",
"text": "Using data from the Fragile Families and Child Wellbeing Study (N=3,870) and cross-lagged path analysis, the authors examined whether spanking at ages 1 and 3 is adversely associated with cognitive skills and behavior problems at ages 3 and 5. The authors found spanking at age 1 was associated with a higher level of spanking and externalizing behavior at age 3, and spanking at age 3 was associated with a higher level of internalizing and externalizing behavior at age 5. The associations between spanking at age 1 and behavioral problems at age 5 operated predominantly through ongoing spanking at age 3. The authors did not find an association between spanking at age 1 and cognitive skills at age 3 or 5.",
"title": ""
},
{
"docid": "0cd6bfaa30ae2c4d62a660f9762bbf8e",
"text": "Scientists who use animals in research must justify the number of animals to be used, and committees that review proposals to use animals in research must review this justification to ensure the appropriateness of the number of animals to be used. This article discusses when the number of animals to be used can best be estimated from previous experience and when a simple power and sample size calculation should be performed. Even complicated experimental designs requiring sophisticated statistical models for analysis can usually be simplified to a single key or critical question so that simple formulae can be used to estimate the required sample size. Approaches to sample size estimation for various types of hypotheses are described, and equations are provided in the Appendix. Several web sites are cited for more information and for performing actual calculations",
"title": ""
},
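The abstract above points to simple formulae for sample size estimation. Below is a minimal sketch of the standard normal-approximation calculation for comparing two group means; it is the generic textbook formula, not necessarily the exact equations in the article's appendix.

```python
# Normal-approximation sample size per group for detecting a mean difference
# `delta` between two groups with common standard deviation `sigma`.
from math import ceil
from scipy.stats import norm

def n_per_group(delta: float, sigma: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)        # two-sided significance level
    z_beta = norm.ppf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Example: detect a 10-unit difference when the within-group SD is 12
print(n_per_group(delta=10, sigma=12))       # -> about 23 animals per group
```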
{
"docid": "1497c5ce53dec0c2d02981d01a419f4b",
"text": "While image registration has been studied in different areas of computer vision, aligning images depicting different scenes remains a challenging problem, closer to recognition than to image matching. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its neighbors in a large image collection consisting of a variety of scenes. For a query image, histogram intersection on a bag-of-visual-words representation is used to find the set of nearest neighbors in the database. The SIFT flow algorithm then consists of matching densely sampled SIFT features between the two images, while preserving spatial discontinuities. The use of SIFT features allows robust matching across different scene/object appearances and the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach is able to robustly align complicated scenes with large spatial distortions. We collect a large database of videos and apply the SIFT flow algorithm to two applications: (i) motion field prediction from a single static image and (ii) motion synthesis via transfer of moving objects.",
"title": ""
},
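The retrieval step described above, histogram intersection on a bag-of-visual-words representation used to find an image's nearest neighbors, is easy to sketch. The dense SIFT-flow matching itself is omitted, and the histograms below are synthetic placeholders.

```python
# Nearest-neighbor retrieval via histogram intersection on bag-of-visual-words
# vectors (toy data; the dense SIFT matching step is not shown).
import numpy as np

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity of two non-negative visual-word histograms."""
    return float(np.minimum(h1, h2).sum())

def nearest_neighbors(query: np.ndarray, database: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k database images most similar to the query."""
    sims = np.minimum(database, query).sum(axis=1)   # intersection with every row
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
db = rng.random((100, 512))     # 100 images, 512 visual words (hypothetical)
q = rng.random(512)
print(nearest_neighbors(q, db, k=3))
```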
{
"docid": "3038ec4ac3d648a4ec052b8d7f854107",
"text": "Anomalous data can negatively impact energy forecasting by causing model parameters to be incorrectly estimated. This paper presents two approaches for the detection and imputation of anomalies in time series data. Autoregressive with exogenous inputs (ARX) and artificial neural network (ANN) models are used to extract the characteristics of time series. Anomalies are detected by performing hypothesis testing on the extrema of the residuals, and the anomalous data points are imputed using the ARX and ANN models. Because the anomalies affect the model coefficients, the data cleaning process is performed iteratively. The models are re-learned on “cleaner” data after an anomaly is imputed. The anomalous data are reimputed to each iteration using the updated ARX and ANN models. The ARX and ANN data cleaning models are evaluated on natural gas time series data. This paper demonstrates that the proposed approaches are able to identify and impute anomalous data points. Forecasting models learned on the unclean data and the cleaned data are tested on an uncleaned out-of-sample dataset. The forecasting model learned on the cleaned data outperforms the model learned on the unclean data with 1.67% improvement in the mean absolute percentage errors and a 32.8% improvement in the root mean squared error. Existing challenges include correctly identifying specific types of anomalies such as negative flows.",
"title": ""
},
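The iterative detect-and-impute loop described above can be sketched with a plain autoregressive model fitted by least squares standing in for the paper's ARX/ANN models (no exogenous inputs here, and the robust z-score test and thresholds are assumptions for illustration).

```python
# Sketch of the iterative detect-and-impute loop, with a plain AR(p) model
# fitted by least squares standing in for the paper's ARX/ANN models.
import numpy as np

def fit_ar(y: np.ndarray, p: int) -> np.ndarray:
    """Least-squares AR(p) coefficients (intercept first, then lags 1..p)."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i - 1:len(y) - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def predict(y: np.ndarray, p: int, coef: np.ndarray, t: int) -> float:
    lags = y[t - p:t][::-1]                              # y[t-1], ..., y[t-p]
    return float(coef[0] + coef[1:] @ lags)

def clean(y: np.ndarray, p: int = 4, z_thresh: float = 5.0, max_iter: int = 20) -> np.ndarray:
    y = y.astype(float).copy()
    for _ in range(max_iter):
        coef = fit_ar(y, p)                              # re-learn on the current, cleaner series
        resid = np.array([y[t] - predict(y, p, coef, t) for t in range(p, len(y))])
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
        flagged = np.where(np.abs(resid) > z_thresh * scale)[0]
        if len(flagged) == 0:
            break
        t = int(flagged[0]) + p                          # first violation in time = the anomaly
        y[t] = predict(y, p, coef, t)                    # impute with the model prediction
    return y

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.standard_normal(300)
series[120] += 3.0                                       # inject a spike anomaly
print(series[120], clean(series)[120])                   # spiked value vs. imputed value
```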
{
"docid": "d8e2fe04a2a900a55561f6e59fecc9fa",
"text": "In this paper realization of digital LCR meter is presented. Realized system is based on integrated circuit AD5933 which is controlled by microcontroller ATmega128. Device can calculate resistance, capacitance and inductance of the device under test as well as Dissipation and Quality factors. Operating frequency range is from 5 to 100 kHz with frequency sweep function in maximum 511 steps. Device has full standalone capabilities with LCD for displaying of results and keyboard for configuration. Created report of measured and calculated values is stored on micro SD card in format which is compatible with MS Excel which ensures easy offline analysis on PC. Accuracy of developed system is tested and verified through comparison with commercial LCR meter.",
"title": ""
},
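The post-processing such a meter performs can be sketched once a calibrated complex impedance at the test frequency is available (for example, derived from the AD5933's real/imaginary outputs after gain-factor calibration, which is not shown here).

```python
# Turning a calibrated complex impedance reading at the test frequency into
# series R, L or C, plus dissipation (D) and quality (Q) factors. The AD5933
# sweep and gain-factor calibration that would yield Z are not shown.
import math

def lcr_from_impedance(Z: complex, freq_hz: float) -> dict:
    w = 2 * math.pi * freq_hz
    R, X = Z.real, Z.imag
    Q = abs(X) / R if R != 0 else float("inf")
    out = {"R_series": R, "Q": Q, "D": 1.0 / Q if Q != 0 else float("inf")}
    if X >= 0:
        out["L_series"] = X / w              # inductive reactance
    else:
        out["C_series"] = -1.0 / (w * X)     # capacitive reactance
    return out

# Example: a 400 nF capacitor with 2 ohm equivalent series resistance at 10 kHz
w = 2 * math.pi * 10e3
Z = complex(2.0, -1.0 / (w * 400e-9))
print(lcr_from_impedance(Z, 10e3))           # expect C_series ~ 400 nF, Q ~ 20, D ~ 0.05
```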
{
"docid": "2d340d004f81a9ed16ead41044103c5d",
"text": "Bio-medical image segmentation is one of the promising sectors where nuclei segmentation from high-resolution histopathological images enables extraction of very high-quality features for nuclear morphometrics and other analysis metrics in the field of digital pathology. The traditional methods including Otsu thresholding and watershed methods do not work properly in different challenging cases. However, Deep Learning (DL) based approaches are showing tremendous success in different modalities of bio-medical imaging including computation pathology. Recently, the Recurrent Residual U-Net (R2U-Net) has been proposed, which has shown state-of-the-art (SOTA) performance in different modalities (retinal blood vessel, skin cancer, and lung segmentation) in medical image segmentation. However, in this implementation, the R2U-Net is applied to nuclei segmentation for the first time on a publicly available dataset that was collected from the Data Science Bowl Grand Challenge in 2018. The R2U-Net shows around 92.15% segmentation accuracy in terms of the Dice Coefficient (DC) during the testing phase. In addition, the qualitative results show accurate segmentation, which clearly demonstrates the robustness of the R2U-Net model for the nuclei segmentation task.",
"title": ""
},
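The metric quoted above, the Dice Coefficient, has a standard definition for binary masks that is independent of the segmentation model:

```python
# Dice Coefficient for binary masks: DC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
print(dice_coefficient(a, b))   # two partially overlapping squares -> ~0.5625
```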
{
"docid": "c8f436b31b7d699461b0b77c0fdbdb22",
"text": "Understanding how a learned black box works is of crucial interest for the future of Machine Learning. In this paper, we pioneer the question of the global interpretability of learned black box models that assign numerical values to symbolic sequential data. To tackle that task, we propose a spectral algorithm for the extraction of weighted automata (WA) from such black boxes. This algorithm does not require the access to a dataset or to the inner representation of the black box: the inferred model can be obtained solely by querying the black box, feeding it with inputs and analyzing its outputs. Experiments using Recurrent Neural Networks (RNN) trained on a wide collection of 48 synthetic datasets and 2 real datasets show that the obtained approximation is of great quality.",
"title": ""
},
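A hedged sketch of the spectral step is given below: query the black box on a prefix/suffix basis to fill Hankel blocks, take a truncated SVD, and read off the weighted-automaton parameters. The black box, query basis, and rank are hypothetical stand-ins, and the construction follows the standard spectral-learning recipe, which may differ in detail from the paper's algorithm.

```python
# Spectral extraction of a weighted automaton (WA) from a black box
# f: strings -> reals, using only queries to f. Standard Hankel-matrix recipe;
# the black box, query basis, and rank below are hypothetical stand-ins.
import numpy as np

alphabet = ["a", "b"]
prefixes = ["", "a", "b", "aa", "ab"]            # assumed query basis
suffixes = ["", "a", "b", "ba", "bb"]

def f(s: str) -> float:
    """Stand-in black box: decays with length, depends on the parity of the 'a' count."""
    return 0.6 ** len(s) * (2 + (-1) ** s.count("a"))

def hankel(middle: str) -> np.ndarray:
    return np.array([[f(p + middle + s) for s in suffixes] for p in prefixes])

H = hankel("")                                   # H[p, s] = f(p s)
Hsym = {sym: hankel(sym) for sym in alphabet}    # H_sym[p, s] = f(p sym s)
hP = np.array([f(p) for p in prefixes])          # values of the prefixes alone
hS = np.array([f(s) for s in suffixes])          # values of the suffixes alone

rank = 2                                         # assumed number of WA states
_, _, Vt = np.linalg.svd(H)
V = Vt[:rank].T
P = H @ V                                        # forward factor

alpha = hS @ V                                   # initial weight vector
beta = np.linalg.pinv(P) @ hP                    # final weight vector
A = {sym: np.linalg.pinv(P) @ Hsym[sym] @ V for sym in alphabet}

def wa_value(s: str) -> float:
    v = alpha
    for sym in s:
        v = v @ A[sym]
    return float(v @ beta)

for w in ["", "a", "ab", "abb", "ba"]:
    print(w, round(f(w), 4), round(wa_value(w), 4))   # the extracted WA should reproduce f
```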
{
"docid": "8010b3fdc1c223202157419c4f61bacf",
"text": "Thanks to information explosion, data for the objects of interest can be collected from increasingly more sources. However, for the same object, there usually exist conflicts among the collected multi-source information. To tackle this challenge, truth discovery, which integrates multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. Several truth discovery methods have been proposed for various scenarios, and they have been successfully applied in diverse application domains. In this survey, we focus on providing a comprehensive overview of truth discovery methods, and summarizing them from different aspects. We also discuss some future directions of truth discovery research. We hope that this survey will promote a better understanding of the current progress on truth discovery, and offer some guidelines on how to apply these approaches in application domains.",
"title": ""
},
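One representative member of the family of methods surveyed above is an iterative weighted-voting scheme: alternate between estimating truths as reliability-weighted votes and re-estimating each source's reliability from its agreement with those truths. The data and the smoothing in the weight update below are illustrative only.

```python
# Generic iterative truth discovery for categorical claims: weighted voting,
# then reliability re-estimation. Toy, hypothetical data; the surveyed methods
# differ in their exact weight-update rules.
from collections import defaultdict

claims = {                       # claims[source][object] = claimed value
    "s1": {"o1": "x", "o2": "y", "o3": "z"},
    "s2": {"o1": "x", "o2": "y", "o3": "w"},
    "s3": {"o1": "q", "o2": "y", "o3": "w"},
    "s4": {"o1": "x", "o2": "p", "o3": "w"},
}

def truth_discovery(claims, iters=10):
    weights = {s: 1.0 for s in claims}                 # start with equal reliability
    truths = {}
    for _ in range(iters):
        votes = defaultdict(lambda: defaultdict(float))
        for s, obs in claims.items():                  # step 1: weighted vote per object
            for obj, val in obs.items():
                votes[obj][val] += weights[s]
        truths = {obj: max(vals, key=vals.get) for obj, vals in votes.items()}
        for s, obs in claims.items():                  # step 2: smoothed agreement rate
            agree = sum(truths[obj] == val for obj, val in obs.items())
            weights[s] = (agree + 1) / (len(obs) + 2)
    return truths, weights

truths, weights = truth_discovery(claims)
print(truths)    # expected: {'o1': 'x', 'o2': 'y', 'o3': 'w'}
print(weights)
```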
{
"docid": "7e57c7abcd4bcb79d5f0fe8b6cd9a836",
"text": "Among the many viruses that are known to infect the human liver, hepatitis B virus (HBV) and hepatitis C virus (HCV) are unique because of their prodigious capacity to cause persistent infection, cirrhosis, and liver cancer. HBV and HCV are noncytopathic viruses and, thus, immunologically mediated events play an important role in the pathogenesis and outcome of these infections. The adaptive immune response mediates virtually all of the liver disease associated with viral hepatitis. However, it is becoming increasingly clear that antigen-nonspecific inflammatory cells exacerbate cytotoxic T lymphocyte (CTL)-induced immunopathology and that platelets enhance the accumulation of CTLs in the liver. Chronic hepatitis is characterized by an inefficient T cell response unable to completely clear HBV or HCV from the liver, which consequently sustains continuous cycles of low-level cell destruction. Over long periods of time, recurrent immune-mediated liver damage contributes to the development of cirrhosis and hepatocellular carcinoma.",
"title": ""
},
{
"docid": "f93ebf9beefe35985b6e31445044e6d1",
"text": "Recent genetic studies have suggested that the colonization of East Asia by modern humans was more complex than a single origin from the South, and that a genetic contribution via a Northern route was probably quite substantial. Here we use a spatially-explicit computer simulation approach to investigate the human migration hypotheses of this region based on one-route or two-route models. We test the likelihood of each scenario by using Human Leukocyte Antigen (HLA) − A, −B, and − DRB1 genetic data of East Asian populations, with both selective and demographic parameters considered. The posterior distribution of each parameter is estimated by an Approximate Bayesian Computation (ABC) approach. Our results strongly support a model with two main routes of colonization of East Asia on both sides of the Himalayas, with distinct demographic histories in Northern and Southern populations, characterized by more isolation in the South. In East Asia, gene flow between populations originating from the two routes probably existed until a remote prehistoric period, explaining the continuous pattern of genetic variation currently observed along the latitude. A significant although dissimilar level of balancing selection acting on the three HLA loci is detected, but its effect on the local genetic patterns appears to be minor compared to those of past demographic events.",
"title": ""
},
{
"docid": "8e8ed9826c1d0e767eced89259cf5d1e",
"text": "Forensic investigators should acquire and analyze large amount of digital evidence and submit to the court the technical truth about facts in virtual worlds. Since digital evidence is complex, diffuse, volatile and can be accidentally or improperly modified after acquired, the chain of custody must ensure that collected evidence can be accepted as truthful by the court. In this scenario, traditional paper-based chain of custody is inefficient and cannot guarantee that the forensic processes follow legal and technical principles in an electronic society. Computer forensics practitioners use forensic software to acquire copies or images from electronic devices and register associated metadata, like computer hard disk serial number and practitioner name. Usually, chain of custody software and data are insufficient to guarantee to the court the quality of forensic images, or guarantee that only the right person had access to the evidence or even guarantee that copies and analysis only were made by authorized manipulations and in the acceptable addresses. Recent developments in forensic software make possible to collect in multiple locations and analysis in distributed environments. In this work we propose the use of the new network facilities existing in Advanced Forensic Format (AFF), an open and extensible format designed for forensic tolls, to increase the quality of electronic chain of custody.",
"title": ""
},
{
"docid": "3f55bac8aaba79cdb28284bbdc4c6e8e",
"text": "We present an OpenCL compilation framework to generate high-performance hardware for FPGAs. For an OpenCL application comprising a host program and a set of kernels, it compiles the host program, generates Verilog HDL for each kernel, compiles the circuit using Altera Complete Design Suite 12.0, and downloads the compiled design onto an FPGA.We can then run the application by executing the host program on a Windows(tm)-based machine, which communicates with kernels on an FPGA using a PCIe interface. We implement four applications on an Altera Stratix IV and present the throughput and area results for each application. We show that we can achieve a clock frequency in excess of 160MHz on our benchmarks, and that OpenCL computing paradigm is a viable design entry method for high-performance computing applications on FPGAs.",
"title": ""
},
{
"docid": "f398eee40f39acd2c2955287ccbb4924",
"text": "One of the ultimate goals of natural language processing (NLP) systems is understanding the meaning of what is being transmitted, irrespective of the medium (e.g., written versus spoken) or the form (e.g., static documents versus dynamic dialogues). Although much work has been done in traditional language domains such as speech and static written text, little has yet been done in the newer communication domains enabled by the Internet, e.g., online chat and instant messaging. This is in part due to the fact that there are no annotated chat corpora available to the broader research community. The purpose of this research is to build a chat corpus, tagged with lexical (token part-of-speech labels), syntactic (post parse tree), and discourse (post classification) information. Such a corpus can then be used to develop more complex, statistical-based NLP applications that perform tasks such as author profiling, entity identification, and social network analysis.",
"title": ""
},
{
"docid": "5f89aac70e93b9fcf4c37d119770f747",
"text": "Partial differential equations (PDEs) play a prominent role in many disciplines of science and engineering. PDEs are commonly derived based on empirical observations. However, with the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored. Such vast quantity of data offers new opportunities for data-driven discovery of physical laws. Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDENet, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models. Comparing with existing approaches, our approach has the most flexibility by learning both differential operators and the nonlinear response function of the underlying PDE model. A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constrains are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originated from wavelet theory). Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment. Equal contribution School of Mathematical Sciences, Peking University, Beijing, China Beijing Computational Science Research Center, Beijing, China Beijing International Center for Mathematical Research, Peking University, Beijing, China Center for Data Science, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "282a6b06fb018fb7e2ec223f74345944",
"text": "The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA.",
"title": ""
}
] |
scidocsrr
|
b788cf524ee3c5d7e09aa6869f8d5ab0
|
Object detection algorithm for segregating similar coloured objects and database formation
|
[
{
"docid": "d4fa5b9d4530b12a394c1e98ea2793b1",
"text": "Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.",
"title": ""
},
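A compact sketch of the branch-and-bound subwindow search is given below for the simple case where a window's classifier score is a sum of per-pixel weights (for example, per-visual-word SVM weights splatted into the image). Sets of rectangles are boxes of intervals over (top, bottom, left, right), and the bound adds the positive weights inside the largest member and the negative weights inside the smallest. This is an illustrative reconstruction under those assumptions, not the paper's implementation.

```python
# Branch-and-bound search for the best-scoring subwindow of a per-pixel weight
# map, using integral images of the positive and negative parts for the bound.
import heapq
import numpy as np

def integral(img):
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, t, b, l, r):                       # inclusive coords; empty -> 0
    if t > b or l > r:
        return 0.0
    return ii[b + 1, r + 1] - ii[t, r + 1] - ii[b + 1, l] + ii[t, l]

def upper_bound(ii_pos, ii_neg, box):
    (t1, t2), (b1, b2), (l1, l2), (r1, r2) = box
    big = rect_sum(ii_pos, t1, b2, l1, r2)          # largest rectangle in the set
    small = rect_sum(ii_neg, t2, b1, l2, r1)        # smallest rectangle in the set
    return big + small

def ess(weights):
    ii_pos = integral(np.maximum(weights, 0))
    ii_neg = integral(np.minimum(weights, 0))
    h, w = weights.shape
    box = ((0, h - 1), (0, h - 1), (0, w - 1), (0, w - 1))
    heap = [(-upper_bound(ii_pos, ii_neg, box), box)]
    while heap:
        bound, box = heapq.heappop(heap)            # best-first by upper bound
        widths = [hi - lo for lo, hi in box]
        if max(widths) == 0:                        # a single rectangle: optimal
            (t, _), (b, _), (l, _), (r, _) = box
            return -bound, (t, b, l, r)
        i = int(np.argmax(widths))                  # split the widest interval
        lo, hi = box[i]
        mid = (lo + hi) // 2
        for part in ((lo, mid), (mid + 1, hi)):
            child = box[:i] + (part,) + box[i + 1:]
            heapq.heappush(heap, (-upper_bound(ii_pos, ii_neg, child), child))

rng = np.random.default_rng(0)
scores = -0.1 * np.ones((40, 40)) + 0.02 * rng.standard_normal((40, 40))
scores[8:20, 12:30] += 0.5                          # a clearly positive region
print(ess(scores))                                  # best score and (top, bottom, left, right)
```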
{
"docid": "550e84d58db67e1d89ac437654f4ccb6",
"text": "Skin detection from images, typically used as a preprocessing step, has a wide range of applications such as dermatology diagnostics, human computer interaction designs, and etc. It is a challenging problem due to many factors such as variation in pigment melanin, uneven illumination, and differences in ethnicity geographics. Besides, age and gender introduce additional difficulties to the detection process. It is hard to determine whether a single pixel is skin or nonskin without considering the context. An efficient traditional hand-engineered skin color detection algorithm requires extensive work by domain experts. Recently, deep learning algorithms, especially convolutional neural networks (CNNs), have achieved great success in pixel-wise labeling tasks. However, CNN-based architectures are not sufficient for modeling the relationship between pixels and their neighbors. In this letter, we integrate recurrent neural networks (RNNs) layers into the fully convolutional neural networks (FCNs), and develop an end-to-end network for human skin detection. In particular, FCN layers capture generic local features, while RNN layers model the semantic contextual dependencies in images. Experimental results on the COMPAQ and ECU skin datasets validate the effectiveness of the proposed approach, where RNN layers enhance the discriminative power of skin detection in complex background situations.",
"title": ""
}
] |
[
{
"docid": "db8cd016ec1ab0644aa32f68346db618",
"text": "This paper presents SpanDex, a set of extensions to Android’s Dalvik virtual machine that ensures apps do not leak users’ passwords. The primary technical challenge addressed by SpanDex is precise, sound, and efficient handling of implicit information flows (e.g., information transferred by a program’s control flow). SpanDex handles implicit flows by borrowing techniques from symbolic execution to precisely quantify the amount of information a process’ control flow reveals about a secret. To apply these techniques at runtime without sacrificing performance, SpanDex runs untrusted code in a data-flow sensitive sandbox, which limits the mix of operations that an app can perform on sensitive data. Experiments with a SpanDex prototype using 50 popular Android apps and an analysis of a large list of leaked passwords predicts that for 90% of users, an attacker would need over 80 login attempts to guess their password. Today the same attacker would need only one attempt for all users.",
"title": ""
},
{
"docid": "6f872a7e9620cff3b1cc4b75a04b09a5",
"text": "Effective management of asthma and other respiratory diseases requires constant monitoring and frequent data collection using a spirometer and longitudinal analysis. However, even after three decades of clinical use, there are very few personalized spirometers available on the market, especially those connecting to smartphones. To address this problem, we have developed mobileSpiro, a portable, low-cost spirometer intended for patient self-monitoring. The mobileSpiro API, and the accompanying Android application, interfaces with the spirometer hardware to capture, process and analyze the data. Our key contributions are automated algorithms on the smartphone which play a technician's role in detecting erroneous patient maneuvers, ensuring data quality, and coaching patients with easy-to-understand feedback, all packaged as an Android app. We demonstrate that mobileSpiro is as accurate as a commercial ISO13485 device, with an inter-device deviation in flow reading of less than 8%, and detects more than 95% of erroneous cough maneuvers in a public CDC dataset.",
"title": ""
},
{
"docid": "e708fc43b5ac8abf8cc2707195e8a45e",
"text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.",
"title": ""
},
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "8383cd262477e2b80c57742229c9dd64",
"text": "Pie charts and their variants are prevalent in business settings and many other uses, even if they are not popular with the academic community. In a recent study, we found that contrary to general belief, there is no clear evidence that these charts are read based on the central angle. Instead, area and arc length appear to be at least equally important. In this paper, we build on that study to test several pie chart variations that are popular in information graphics: exploded pie chart, pie with larger slice, elliptical pie, and square pie (in addition to a regular pie chart used as the baseline). We find that even variants that do not distort central angle cause greater error than regular pie charts. Charts that distort the shape show the highest error. Many of our predictions based on the previous study’s results are borne out by this study’s findings.",
"title": ""
},
{
"docid": "30bc7923529eec5ac7d62f91de804f8e",
"text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.",
"title": ""
},
{
"docid": "bc66054dc60a0b8de2d6e0b769240272",
"text": "In this paper, we present the idea and methodologies on predicting the age span of users over microblog dataset. Given a user’s personal information such as user tags, job, education, self-description, and gender, as well as the content of his/her microblogs, we automatically classify the user’s age into one of four predefined ranges. Particularly, we extract a set of features from the given information about the user, and employ a statistic-based framework to solve this problem. The measurement shows that our proposed method incorporating selected features has an accuracy of around 71% on average over the training dataset.",
"title": ""
},
{
"docid": "15ada8f138d89c52737cfb99d73219f0",
"text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.",
"title": ""
},
{
"docid": "82ef80d6257c5787dcf9201183735497",
"text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.",
"title": ""
},
{
"docid": "52afd42744f96b3c6492186c9ddd16a6",
"text": "Structured hourly nurse rounding is an effective method to improve patient satisfaction and clinical outcomes. This program evaluation describes outcomes related to the implementation of hourly nurse rounding in one medical-surgical unit in a large community hospital. Overall Hospital Consumer Assessment of Healthcare Providers and Systems domain scores increased with the exception of responsiveness of staff. Patient falls and hospital-acquired pressure ulcers decreased during the project period.",
"title": ""
},
{
"docid": "cb39f6ac5646e733604902a4b74b797c",
"text": "In this paper, we present a generative model based approach to solve the multi-view stereo problem. The input images are considered to be generated by either one of two processes: (i) an inlier process, which generates the pixels which are visible from the reference camera and which obey the constant brightness assumption, and (ii) an outlier process which generates all other pixels. Depth and visibility are jointly modelled as a hiddenMarkov Random Field, and the spatial correlations of both are explicitly accounted for. Inference is made tractable by an EM-algorithm, which alternates between estimation of visibility and depth, and optimisation of model parameters. We describe and compare two implementations of the E-step of the algorithm, which correspond to the Mean Field and Bethe approximations of the free energy. The approach is validated by experiments on challenging real-world scenes, of which two are contaminated by independently moving objects.",
"title": ""
},
{
"docid": "2faa73eec710382a6f3d658562bf7928",
"text": "We appreciate the comments provided by Thompson et al. in their Letter to the Editor, regarding our study BThe myth: in vivo degradation of polypropylene-based meshes^ [1]. However, we question the motives of the authors, who have notably disclosed that they provide medicolegal testimony on behalf of the plaintiffs in mesh litigation, for bringing their courtroom rhetoric into this discussion. Thompson et al. grossly erred in claiming that we only analyzed the exposed surface of the explants, and not the flaked material that had been removed when cleaning the explants (Bremoved material^) and ended up in the cleaning solution. As stated in our paper, however, the flaked material was analyzed using light microscopy (LM), scanning electron microscopy (SEM), and Fourier transform infrared (FTIR) microscopy before cleaning and after each of the five sequences of the overall cleaning process. Analyzing the cleaning solution would be redundant and therefore serve no purpose, i.e., the material on the surface was already analyzed and then ended up in the cleaning solution. Based on our chemical and microscopic analyses (LM, SEM, and FTIR), we concluded that the explanted Prolene meshes that we examined did not degrade or oxidize in vivo. Thompson et al. noted that that there are Bwell over 100 peer-reviewed articles, accepting or describing the degradation of PP [polypropylene] in variable conditions and degradation of other implantable polymers in the body .̂ They also claimed that they are not aware of any other peer-reviewed journal articles supporting the notion that PP does not degrade in the body. As stated in our paper, it is well documented that unstabilized PP oxidizes readily under ultraviolet (UV) light and upon exposure to high temperatures. However, as we also discuss and cite to in our paper, properly formulated PP is stable in oxidizing media, including elevated temperatures, in in vivo applications, and to a lesser extent, under UV light. Thompson et al. further claimed that our study Bdoes not explain the multiple features of PP degradation reported in the literature.^ This is an erroneous statement because they must have either failed to review or chose to ignore the discussion of the literature in our paper. For instance, the literature is replete with the chemistry of PP degradation, confirming simultaneous production of carbonyl groups and loss of molecular weight. It is well known chemistry that oxidative degradation of PP produces carbonyl groups and if there is no carbonyl group formation, there is no oxidative degradation. To further highlight this point, Clavé et al. [2] have often been cited as supporting the notion that PP degrades in vivo, and as discussed in our manuscript, their findings and statements in the study confirmed that they were unable to prove the existence of PP degradation from any of their various tests. They further failed to include that Liebert’s investigation reported explicitly that stabilized PP, such as Prolene, did not degrade. Thompson et al. also claimed that the degradation process for PP continues until no more PP can be oxidized, with the corresponding appearance of external surface features and hardening and shrinkage of the material. 
The fallacy of their statement, in the context of the explanted meshes that we examined, is highlighted by the clean fibers that retained their manufacturing extrusion lines and the lack of a wide range of crack morphology (e.g., varying crack depths into the core of the PP fibers) for a given explant and across explants from different patients with different implantation durations. This reply refers to the comment available at doi:10.1007/s00192-016-3233-z.",
"title": ""
},
{
"docid": "1fba9ed825604e8afde8459a3d3dc0c0",
"text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.",
"title": ""
},
{
"docid": "fec4f80f907d65d4b73480b9c224d98a",
"text": "This paper presents a novel finite position set-phase locked loop (FPS-PLL) for sensorless control of surface-mounted permanent-magnet synchronous generators (PMSGs) in variable-speed wind turbines. The proposed FPS-PLL is based on the finite control set-model predictive control concept, where a finite number of rotor positions are used to estimate the back electromotive force of the PMSG. Then, the estimated rotor position, which minimizes a certain cost function, is selected to be the optimal rotor position. This eliminates the need of a fixed-gain proportional-integral controller, which is commonly utilized in the conventional PLL. The performance of the proposed FPS-PLL has been experimentally investigated and compared with that of the conventional one using a 14.5 kW PMSG with a field-oriented control scheme utilized as the generator control strategy. Furthermore, the robustness of the proposed FPS-PLL is investigated against PMSG parameters variations.",
"title": ""
},
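The core idea above, replacing the PI-based PLL with a search over a finite set of candidate rotor positions that minimizes a cost function, can be sketched as follows. The back-EMF sign convention, the candidate span, and the toy tracking loop are assumptions made for illustration, not the paper's exact formulation.

```python
# Finite-position-set search at the heart of an FPS-PLL-style estimator: evaluate
# a cost over candidate rotor angles around the predicted position and pick the
# minimizer. Simplified back-EMF model; parameters are illustrative assumptions.
import numpy as np

def fps_pll_step(e_alpha, e_beta, theta_prev, omega, dt, n_candidates=21, span=0.05):
    """One estimation step. e_alpha/e_beta: estimated back-EMF components,
    theta_prev: previous position estimate [rad], omega: speed estimate [rad/s]."""
    theta_pred = theta_prev + omega * dt                       # advance by one sample
    candidates = theta_pred + np.linspace(-span, span, n_candidates)
    E = np.hypot(e_alpha, e_beta)                              # back-EMF magnitude
    # Assumed convention for a surface-mounted PMSG: e_alpha ~ -E*sin(theta), e_beta ~ E*cos(theta)
    cost = (e_alpha + E * np.sin(candidates)) ** 2 + (e_beta - E * np.cos(candidates)) ** 2
    return candidates[int(np.argmin(cost))]

# Toy test: track a constant-speed rotor from noisy back-EMF samples
dt, omega_true, E_true = 1e-4, 2 * np.pi * 50, 100.0
theta_true, theta_hat = 0.3, 0.0
rng = np.random.default_rng(0)
for _ in range(2000):
    theta_true += omega_true * dt
    e_a = -E_true * np.sin(theta_true) + rng.normal(0, 1.0)
    e_b = E_true * np.cos(theta_true) + rng.normal(0, 1.0)
    theta_hat = fps_pll_step(e_a, e_b, theta_hat, omega_true, dt)
print(float(np.angle(np.exp(1j * (theta_hat - theta_true)))))   # residual error [rad], near 0
```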
{
"docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0",
"text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. Clustering on relational data sets when majority of its attributes are of categorical types makes interesting facts. No earlier work has been done on clustering categorical attributes of relational data set types making use of the property of functional dependency as parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based and introduces a new notion of similarity based on dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures of categorical attributes. This novel similarity measure can be used to apply on tuples and their respective values. The important property of categorical domain is that they have smaller number of attribute values. The similarity measure of relational data sets then can be applied to the smaller data sets for efficient results.",
"title": ""
},
{
"docid": "0ebcd0c087454a9812ee54a0cd71a1a9",
"text": "In this paper, we present the Smart City Architecture developed in the context of the ARTEMIS JU SP3 SOFIA project. It is an Event Driven Architecture that allows the management and cooperation of heterogeneous sensors for monitoring public spaces. The main components of the architecture are implemented in a testbed on a subway scenario with the objective to demonstrate that our proposed solution, can enhance the detection of anomalous events and simplify both the operators tasks and the communications to passengers in case of emergency.",
"title": ""
},
{
"docid": "21dd193ec6849fa78ba03333708aebea",
"text": "Since the inception of Bitcoin technology, its underlying data structureâĂŞ-the blockchainâĂŞ-has garnered much attention due to properties such as decentralization, transparency, and immutability. These properties make blockchains suitable for apps that require disintermediation through trustless exchange, consistent and incorruptible transaction records, and operational models beyond cryptocurrency. In particular, blockchain and its programmable smart contracts have the potential to address healthcare interoperability issues, such as enabling effective interactions between users and medical applications, delivering patient data securely to a variety of organizations and devices, and improving the overall efficiency of medical practice workflow. Despite the interest in using blockchain technology for healthcare interoperability, however, little information is available on the concrete architectural styles and recommendations for designing blockchain-based apps targeting healthcare. This paper provides an initial step in filling this gap by showing: (1) the features and implementation challenges in healthcare interoperability, (2) an end-to-end case study of a blockchain-based healthcare app that we are developing, and (3) how designing blockchain-based apps using familiar software patterns can help address healthcare specific challenges.",
"title": ""
},
{
"docid": "456fd41267a82663fee901b111ff7d47",
"text": "The tagging of Named Entities, the names of particular things or classes, is regarded as an important component technology for many NLP applications. The first Named Entity set had 7 types, organization, location, person, date, time, money and percent expressions. Later, in the IREX project artifact was added and ACE added two, GPE and facility, to pursue the generalization of the technology. However, 7 or 8 kinds of NE are not broad enough to cover general applications. We proposed about 150 categories of NE (Sekine et al. 2002) and now we have extended it again to 200 categories. Also we have developed dictionaries and an automatic tagger for NEs in Japanese.",
"title": ""
},
{
"docid": "71f7ce3b6e4a20a112f6a1ae9c22e8e1",
"text": "The neural correlates of many emotional states have been studied, most recently through the technique of fMRI. However, nothing is known about the neural substrates involved in evoking one of the most overwhelming of all affective states, that of romantic love, about which we report here. The activity in the brains of 17 subjects who were deeply in love was scanned using fMRI, while they viewed pictures of their partners, and compared with the activity produced by viewing pictures of three friends of similar age, sex and duration of friendship as their partners. The activity was restricted to foci in the medial insula and the anterior cingulate cortex and, subcortically, in the caudate nucleus and the putamen, all bilaterally. Deactivations were observed in the posterior cingulate gyrus and in the amygdala and were right-lateralized in the prefrontal, parietal and middle temporal cortices. The combination of these sites differs from those in previous studies of emotion, suggesting that a unique network of areas is responsible for evoking this affective state. This leads us to postulate that the principle of functional specialization in the cortex applies to affective states as well.",
"title": ""
}
] |
scidocsrr
|
06151e8749d9a5ed94b1c04894eb5d39
|
Rule-based composition of intelligent mechatronic components in manufacturing systems using prolog
|
[
{
"docid": "b20aa2222759644b4b60b5b450424c9e",
"text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "80aeb12d50a77ad455e5786cf75e901f",
"text": "New over-the-air (OTA) measurement technology is wanted for quantitative testing of modern wireless devices for use in multipath. We show that the reverberation chamber emulates a rich isotropic multipath (RIMP), making it an extreme reference environment for testing of wireless devices. This thereby complements testing in anechoic chambers representing the opposite extreme reference environment: pure line-of-sight (LOS). Antenna diversity gain was defined for RIMP environments based on improved fading performance. This paper finds this RIMP-diversity gain also valid as a metric of the cumulative improvement of the 1% worst users randomly distributed in the RIMP environment. The paper argues that LOS in modern wireless systems is random due to randomness of the orientations of the users and their devices. This leads to the definition of cumulative LOS-diversity gain of the 1% worst users in random LOS. This is generally not equal to the RIMP-diversity gain. The paper overviews the research on reverberation chambers for testing of wireless devices in RIMP environments. Finally, it presents a simple theory that can accurately model measured throughput for a long-term evolution (LTE) system with orthogonal frequency-division multiplexing (OFDM) and multiple-input-multiple-output (MIMO), the effects of which can clearly be seen and depend on the controllable time delay spread in the chamber.",
"title": ""
},
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "cc15da3e71152d3df3b3b6260f8b0719",
"text": "Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.",
"title": ""
},
{
"docid": "edb5b733e77271dd4e1afaf742388a68",
"text": "The Intolerance of Uncertainty Model was initially developed as an explanation for worry within the context of generalized anxiety disorder. However, recent research has identified intolerance of uncertainty (IU) as a possible transdiagnostic maintaining factor across the anxiety disorders and depression. The aim of this study was to determine whether IU mediated the relationship between neuroticism and symptoms related to various anxiety disorders and depression in a treatment-seeking sample (N=328). Consistent with previous research, IU was significantly associated with neuroticism as well as with symptoms of social phobia, panic disorder and agoraphobia, obsessive-compulsive disorder, generalized anxiety disorder, and depression. Moreover, IU explained unique variance in these symptom measures when controlling for neuroticism. Mediational analyses showed that IU was a significant partial mediator between neuroticism and all symptom measures, even when controlling for symptoms of other disorders. More specifically, anxiety in anticipation of future uncertainty (prospective anxiety) partially mediated the relationship between neuroticism and symptoms of generalized anxiety disorder (i.e. worry) and obsessive-compulsive disorder, whereas inaction in the face of uncertainty (inhibitory anxiety) partially mediated the relationship between neuroticism and symptoms of social anxiety, panic disorder and agoraphobia, and depression. Sobel's test demonstrated that all hypothesized meditational pathways were associated with significant indirect effects, although the mediation effect was stronger for worry than other symptoms. Potential implications of these findings for the treatment of anxiety disorders and depression are discussed.",
"title": ""
},
{
"docid": "e5b7435bd9b761e85bf3ffed0c4a8ee0",
"text": "A novel method of analysis for a linear series-fed microstrip antenna array is developed, which is based on a set of canonical coefficients defined for the elements of the array. This method accounts for the mutual coupling between the elements, and allows for a design that has an arbitrary amplitude and phase of the radiating patch currents. This method has the simplicity of a CAD approach while maintaining an accuracy close to that of a full-wave method. The coefficients involved in the formulation are determined by full-wave simulation on either a single patch element or on two patch elements (for mutual coupling calculations).",
"title": ""
},
{
"docid": "1231b1e1e0ace856815e32dbdc38a113",
"text": "Availability of cloud systems is one of the main concerns of cloud computing. The term, availability of clouds, is mainly evaluated by ubiquity of information comparing with resource scaling. In clouds, load balancing, as a method, is applied across different data centers to ensure the network availability by minimizing use of computer hardware, software failures and mitigating recourse limitations. This work discusses the load balancing in cloud computing and then demonstrates a case study of system availability based on a typical Hospital Database Management solution.",
"title": ""
},
{
"docid": "ca2e577e819ac49861c65bfe8d26f5a1",
"text": "A design of a delay based self-oscillating class-D power amplifier for piezoelectric actuators is presented and modelled. First order and second order configurations are discussed in detail and analytical results reveal the stability criteria of a second order system, which should be respected in the design. It also shows if the second order system converges, it will tend to give a correct pulse modulation regarding to the input modulation index. Experimental results show the effectiveness of this design procedure. For a piezoelectric load of 400 nF, powered by a 150 V 10 kHz sinusoidal signal, a total harmonic distortion (THD) of 4.3% is obtained.",
"title": ""
},
{
"docid": "6c6afdefc918e6dfdb6bc5f5bb96cf45",
"text": "Due to the complexity and uncertainty of socioeconomic environments and cognitive diversity of group members, the cognitive information over alternatives provided by a decision organization consisting of several experts is usually uncertain and hesitant. Hesitant fuzzy preference relations provide a useful means to represent the hesitant cognitions of the decision organization over alternatives, which describe the possible degrees that one alternative is preferred to another by using a set of discrete values. However, in order to depict the cognitions over alternatives more comprehensively, besides the degrees that one alternative is preferred to another, the decision organization would give the degrees that the alternative is non-preferred to another, which may be a set of possible values. To effectively handle such common cases, in this paper, the dual hesitant fuzzy preference relation (DHFPR) is introduced and the methods for group decision making (GDM) with DHFPRs are investigated. Firstly, a new operator to aggregate dual hesitant fuzzy cognitive information is developed, which treats the membership and non-membership information fairly, and can generate more neutral results than the existing dual hesitant fuzzy aggregation operators. Since compatibility is a very effective tool to measure the consensus in GDM with preference relations, then two compatibility measures for DHFPRs are proposed. After that, the developed aggregation operator and compatibility measures are applied to GDM with DHFPRs and two GDM methods are designed, which can be applied to different decision making situations. Each GDM method involves a consensus improving model with respect to DHFPRs. The model in the first method reaches the desired consensus level by adjusting the group members’ preference values, and the model in the second method improves the group consensus level by modifying the weights of group members according to their contributions to the group decision, which maintains the group members’ original opinions and allows the group members not to compromise for reaching the desired consensus level. In actual applications, we may choose a proper method to solve the GDM problems with DHFPRs in light of the actual situation. Compared with the GDM methods with IVIFPRs, the proposed methods directly apply the original DHFPRs to decision making and do not need to transform them into the IVIFPRs, which can avoid the loss and distortion of original information, and thus can generate more precise decision results.",
"title": ""
},
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "08765f109452855227eb85395e4c49b1",
"text": "and on their differing feelings toward the politicians (in this case, across liking, trusting, and feeling affiliated with the candidates). After 16 test runs, the voters did indeed change their attitudes and feelings toward the candidates in different and yet generally realistic ways, and even changed their attitudes about other issues based on what a candidate extolled.",
"title": ""
},
{
"docid": "fee3ab82c8bcc4f76ab6a2307158ea51",
"text": "Three experiments investigated the effects of making errors oneself, as compared to just hearing the correct answer without error generation, hearing another person make an error, or being \"on-the-hook,\" that is, possibly but not necessarily being the person who would be \"called-on\" to give a response. In all three experiments, generating either an error of commission or generating the correct response, oneself, out loud, compared to being a person who heard another's commission errors (or correct responses), was beneficial for later recall of the correct answer. Experiment 1 suggested that the decrement in recall from self- to other-generation could be partially offset by being \"on-the-hook.\" However, this benefit was fragile and did not hold up either at a delay or when the presence of the other participants was downplayed. The beneficial effect of self-generation, both of correct responses and of errors of commission is consistent with reconsolidation theory. That theory holds that retrieval has a special status as a memory process that renders the retrieved traces labile. If the person was correct, reconsolidating the correct trace results in strengthening. If wrong, the malleability of the retrieved trace implied by reconsolidation theory makes it open to enhanced modification and correction. If the person was not the agent who retrieved, though, such as when someone else retrieves information, or when nothing is retrieved as is the case with omission errors (which we argue is truly how the term \"unsuccessful retrieval\" should be used), the benefit conferred by the special malleability entailed by the postulated reconsolidation process does not obtain.",
"title": ""
},
{
"docid": "1f3a41fc5202d636fcfe920603df57e4",
"text": "We present data on corporal punishment (CP) by a nationally representative sample of 991 American parents interviewed in 1995. Six types of CP were examined: slaps on the hand or leg, spanking on the buttocks, pinching, shaking, hitting on the buttocks with a belt or paddle, and slapping in the face. The overall prevalence rate (the percentage of parents using any of these types of CP during the previous year) was 35% for infants and reached a peak of 94% at ages 3 and 4. Despite rapid decline after age 5, just over half of American parents hit children at age 12, a third at age 14, and 13% at age 17. Analysis of chronicity found that parents who hit teenage children did so an average of about six times during the year. Severity, as measured by hitting the child with a belt or paddle, was greatest for children age 5-12 (28% of such children). CP was more prevalent among African American and low socioeconomic status parents, in the South, for boys, and by mothers. The pervasiveness of CP reported in this article, and the harmful side effects of CP shown by recent longitudinal research, indicates a need for psychology and sociology textbooks to reverse the current tendency to almost ignore CP and instead treat it as a major aspect of the socialization experience of American children; and for developmental psychologists to be cognizant of the likelihood that parents are using CP far more often than even advocates of CP recommend, and to inform parents about the risks involved.",
"title": ""
},
{
"docid": "a3386199b44e3164fafe8a8ae096b130",
"text": "Diehl Aerospace GmbH (DAs) is currently involved in national German Research & Technology (R&T) projects (e.g. SYSTAVIO, SESAM) and in European R&T projects like ASHLEY to extend and to improve the Integrated Modular Avionics (IMA) technology. Diehl Aerospace is investing to expand its current IMA technology to enable further integration of systems including hardware modules, associated software, tools and processes while increasing the level of standardization. An additional objective is to integrate more systems on a common computing platform which uses the same toolchain, processes and integration experiences. New IMA components enable integration of high integrity fast loop system applications such as control applications. Distributed architectures which provide new types of interfaces allow integration of secondary power distribution systems along with other IMA functions. Cross A/C type usage is also a future emphasis to increase standardization and decrease development and operating costs as well as improvements on time to market and affordability of systems.",
"title": ""
},
{
"docid": "c8b5c1cba52f2fe85c21aa6526ce8985",
"text": "Quantifying how the spectral content of speech relates to changes in mental state may be crucial in building an objective speech-based depression classification system with clinical utility. This paper investigates the hypothesis that important depression based information can be captured within the covariance structure of a Gaussian Mixture Model (GMM) of recorded speech. Significant negative correlations found between a speaker’s average weighted variance a GMMbased indicator of speaker variability and their level of depression support this hypothesis. Further evidence is provided by the comparison of classification accuracies from seven different GMM-UBM systems, each formed by varying different parameter combinations during MAP adaption. This analysis shows that variance-only adaptation either outperforms or matches the de facto standard mean-only adaptation when classifying both the presence and severity of depression. This result is perhaps the first of its kind seen in GMM-UBM speech classification.",
"title": ""
},
{
"docid": "066e0f4902bb4020c6d3fad7c06ee519",
"text": "Automatic traffic light detection (TLD) plays an important role for driver-assistance system and autonomous vehicles. State-of-the-art TLD systems showed remarkable results by exploring visual information from static frames. However, traffic lights from different countries, regions, and manufactures are always visually distinct. The existing large intra-class variance makes the pre-trained detectors perform good on one dataset but fail on the others with different origins. One the other hand, LED traffic lights are widely used because of better energy efficiency. Based on the observation LED traffic light flashes in proportion to the input AC power frequency, we propose a hybrid TLD approach which combines the temporally frequency analysis and visual information using high-speed camera. Exploiting temporal information is shown to be very effective in the experiments. It is considered to be more robust than visual information-only methods.",
"title": ""
},
{
"docid": "f48d02ff3661d3b91c68d6fcf750f83e",
"text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.",
"title": ""
},
{
"docid": "5718c733a80805c5dbb4333c2d298980",
"text": "{Portions reprinted, with permission from Keim et al. #2001 IEEE Abstract Simple presentation graphics are intuitive and easy-to-use, but show only highly aggregated data presenting only a very small number of data values (as in the case of bar charts) and may have a high degree of overlap occluding a significant portion of the data values (as in the case of the x-y plots). In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots, which allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information of the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized in an effective way. It is shown that, for an effective pixel placement, a complex optimization problem has to be solved. The authors then present an algorithm which efficiently solves the problem. The application to a number of real-world ecommerce data sets shows the wide applicability and usefulness of this new idea, and a comparison to other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages. Information Visualization (2002) 1, 20 – 34. DOI: 10.1057/palgrave/ivs/9500003",
"title": ""
},
{
"docid": "2dd6fff23e32efc7d6ead42d0dbc4ff0",
"text": "Recent technological advances in wheat genomics provide new opportunities to uncover genetic variation in traits of breeding interest and enable genome-based breeding to deliver wheat cultivars for the projected food requirements for 2050. There has been tremendous progress in development of whole-genome sequencing resources in wheat and its progenitor species during the last 5 years. High-throughput genotyping is now possible in wheat not only for routine gene introgression but also for high-density genome-wide genotyping. This is a major transition phase to enable genome-based breeding to achieve progressive genetic gains to parallel to projected wheat production demands. These advances have intrigued wheat researchers to practice less pursued analytical approaches which were not practiced due to the short history of genome sequence availability. Such approaches have been successful in gene discovery and breeding applications in other crops and animals for which genome sequences have been available for much longer. These strategies include, (i) environmental genome-wide association studies in wheat genetic resources stored in genbanks to identify genes for local adaptation by using agroclimatic traits as phenotypes, (ii) haplotype-based analyses to improve the statistical power and resolution of genomic selection and gene mapping experiments, (iii) new breeding strategies for genome-based prediction of heterosis patterns in wheat, and (iv) ultimate use of genomics information to develop more efficient and robust genome-wide genotyping platforms to precisely predict higher yield potential and stability with greater precision. Genome-based breeding has potential to achieve the ultimate objective of ensuring sustainable wheat production through developing high yielding, climate-resilient wheat cultivars with high nutritional quality.",
"title": ""
},
{
"docid": "559e5a5da1f0a924fc432e7f4c3548bd",
"text": "Deep learning is recently showing outstanding results for solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capabilities for learning representations from the complex data acquired in real environments make it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are currently being extensively applied for several types of civilian tasks in applications going from security, surveillance, and disaster rescue to parcel delivery or warehouse management. In this paper, a thorough review has been performed on recent reported uses and applications of deep learning forUAVs, including themost relevant developments as well as their performances and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning for UAV-based solutions.",
"title": ""
},
{
"docid": "04ed876237214c1366f966b80ebb7fd4",
"text": "Load Balancing is essential for efficient operations indistributed environments. As Cloud Computing is growingrapidly and clients are demanding more services and betterresults, load balancing for the Cloud has become a veryinteresting and important research area. Many algorithms weresuggested to provide efficient mechanisms and algorithms forassigning the client's requests to available Cloud nodes. Theseapproaches aim to enhance the overall performance of the Cloudand provide the user more satisfying and efficient services. Inthis paper, we investigate the different algorithms proposed toresolve the issue of load balancing and task scheduling in CloudComputing. We discuss and compare these algorithms to providean overview of the latest approaches in the field.",
"title": ""
}
] |
scidocsrr
|
b458ce1c4b32894522418d88521b0413
|
Using Smartphones to Detect Car Accidents and Provide Situational Awareness to Emergency Responders
|
[
{
"docid": "8718d91f37d12b8ff7658723a937ea84",
"text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.",
"title": ""
},
{
"docid": "2ecd0bf132b3b77dc1625ef8d09c925b",
"text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.",
"title": ""
}
] |
[
{
"docid": "f850321173db137674eb74a0dd2afc30",
"text": "The relational data model has been dominant and widely used since 1970. However, as the need to deal with big data grows, new data models, such as Hadoop and NoSQL, were developed to address the limitation of the traditional relational data model. As a result, determining which data model is suitable for applications has become a challenge. The purpose of this paper is to provide insight into choosing the suitable data model by conducting a benchmark using Yahoo! Cloud Serving Benchmark (YCSB) on three different database systems: (1) MySQL for relational data model, (2) MongoDB for NoSQL data model, and (3) HBase for Hadoop framework. The benchmark was conducted by running four different workloads. Each workload is executed using a different increasing operation and thread count, while observing how their change respectively affects throughput, latency, and runtime.",
"title": ""
},
{
"docid": "497fdaf295df72238f9ec0cb879b6a48",
"text": "A vehicle or fleet management system is implemented for tracking the movement of the vehicle at any time from any location. This proposed system helps in real time tracking of the vehicle using a smart phone application. This method is easy and efficient when compared to other implementations. In emerging technology of developing IOT (Internet of Things) the generic 8 bit/16 bit micro controllers are replaced by 32bit micro controllers in the embedded systems. This has many advantages like use of 32bit micro controller’s scalability, reusability and faster execution speed. Implementation of RTOS is very much necessary for having a real time system. RTOS features are application portability, reusability, more efficient use of system resources. The proposed system uses a 32bit ARM7 based microcontroller with an embedded Real Time Operating System (RTOS).The vehicle unit application is written on FreeRTOS. The peripheral drivers like UART, External interrupt are developed for RTOS aware environment. The vehicle unit consists of a GPS/GPRS module where the position of the vehicle is got from the Global Positioning System (GPS) and the General Packet Radio Service (GPRS) is used to update the timely information of the vehicle position. The vehicle unit updates the location to the Fleet management application on the web server. The vehicle management is a java based web application integrated with MySQL Database. The web application in the proposed system is based on OpenGTS open source vehicle tracking application. A GoTrack Android application is configured to work with web application. The smart phone application also provides a separate login for administrator to add, edit and remove the vehicles on the fleet management system. The users and administrators can continuously monitor the vehicle using a smart phone application.",
"title": ""
},
{
"docid": "92684148cd7d2a6a21657918015343b0",
"text": "Radiative wireless power transfer (WPT) is a promising technology to provide cost-effective and real-time power supplies to wireless devices. Although radiative WPT shares many similar characteristics with the extensively studied wireless information transfer or communication, they also differ significantly in terms of design objectives, transmitter/receiver architectures and hardware constraints, and so on. In this paper, we first give an overview on the various WPT technologies, the historical development of the radiative WPT technology and the main challenges in designing contemporary radiative WPT systems. Then, we focus on the state-of-the-art communication and signal processing techniques that can be applied to tackle these challenges. Topics discussed include energy harvester modeling, energy beamforming for WPT, channel acquisition, power region characterization in multi-user WPT, waveform design with linear and non-linear energy receiver model, safety and health issues of WPT, massive multiple-input multiple-output and millimeter wave enabled WPT, wireless charging control, and wireless power and communication systems co-design. We also point out directions that are promising for future research.",
"title": ""
},
{
"docid": "3bb9fc6e09c9ce13252a04d6978d1bfc",
"text": "Recently, sparse coding has been successfully applied in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on sparse coding. We first analyze the benefits of using sparse coding in visual tracking and then categorize these methods into appearance modeling based on sparse coding (AMSC) and target searching based on sparse representation (TSSR) as well as their combination. For each categorization, we introduce the basic framework and subsequent improvements with emphasis on their advantages and disadvantages. Finally, we conduct extensive experiments to compare the representative methods on a total of 20 test sequences. The experimental results indicate that: (1) AMSC methods significantly outperform TSSR methods. (2) For AMSC methods, both discriminative dictionary and spatial order reserved pooling operators are important for achieving high tracking accuracy. (3) For TSSR methods, the widely used identity pixel basis will degrade the performance when the target or candidate images are not aligned well or severe occlusion occurs. (4) For TSSR methods, ‘1 norm minimization is not necessary. In contrast, ‘2 norm minimization can obtain comparable performance but with lower computational cost. The open questions and future research topics are also discussed. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "50268ed4eb8f14966d9d0ec32b01429f",
"text": "Women's empowerment is an important goal in achieving sustainable development worldwide. Offering access to microfinance services to women is one way to increase women's empowerment. However, empirical evidence provides mixed results with respect to its effectiveness. We reviewed previous research on the impact of microfinance services on different aspects of women's empowerment. We propose a Three-Dimensional Model of Women's Empowerment to integrate previous findings and to gain a deeper understanding of women's empowerment in the field of microfinance services. This model proposes that women's empowerment can take place on three distinct dimensions: (1) the micro-level, referring to an individuals' personal beliefs as well as actions, where personal empowerment can be observed (2) the meso-level, referring to beliefs as well as actions in relation to relevant others, where relational empowerment can be observed and (3) the macro-level, referring to outcomes in the broader, societal context where societal empowerment can be observed. Importantly, we propose that time and culture are important factors that influence women's empowerment. We suggest that the time lag between an intervention and its evaluation may influence when empowerment effects on the different dimensions occur and that the type of intervention influences the sequence in which the three dimensions can be observed. We suggest that cultures may differ with respect to which components of empowerment are considered indicators of empowerment and how women's position in society may influence the development of women's empowerment. We propose that a Three-Dimensional Model of Women's Empowerment should guide future programs in designing, implementing, and evaluating their interventions. As such our analysis offers two main practical implications. First, based on the model we suggest that future research should differentiate between the three dimensions of women's empowerment to increase our understanding of women's empowerment and to facilitate comparisons of results across studies and cultures. Second, we suggest that program designers should specify how an intervention should stimulate which dimension(s) of women's empowerment. We hope that this model inspires longitudinal and cross-cultural research to examine the development of women's empowerment on the personal, relational, and societal dimension.",
"title": ""
},
{
"docid": "32acba3e072e0113759278c57ee2aee2",
"text": "Software product lines (SPL) relying on UML technology have been a breakthrough in software reuse in the IT domain. In the industrial automation domain, SPL are not yet established in industrial practice. One reason for this is that conventional function block programming techniques do not adequately support SPL architecture definition and product configuration, while UML tools are not industrially accepted for control software development. In this paper, the use of object oriented (OO) extensions of IEC 61131–3 are used to bridge this gap. The SPL architecture and product specifications are expressed as UML class diagrams, which serve as straightforward specifications for configuring the IEC 61131–3 control application with OO extensions. A product configurator tool has been developed using PLCopen XML technology to support the generation of an executable IEC 61131–3 application according to chosen product options. The approach is demonstrated using a mobile elevating working platform as a case study.",
"title": ""
},
{
"docid": "1f1158ad55dc8a494d9350c5a5aab2f2",
"text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. Both, for example, 1 This article was published in Australian Journal of Learning Disabilities, 2003 8, (4).",
"title": ""
},
{
"docid": "83e3ce2b70e1f06073fd0a476bf04ff7",
"text": "Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today's society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. The case study presented in this paper illustrates the use of Disaster-CDM on an example of simulation models.",
"title": ""
},
{
"docid": "64e2b73e8a2d12a1f0bbd7d07fccba72",
"text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.",
"title": ""
},
{
"docid": "21393a1c52b74517336ef3e08dc4d730",
"text": "The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.",
"title": ""
},
{
"docid": "22eb9b1de056d03d15c0a3774a898cfd",
"text": "Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not the least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.",
"title": ""
},
{
"docid": "e472a8e75ddf72549aeb255aa3d6fb79",
"text": "In the presence of normal sensory and motor capacity, intelligent behavior is widely acknowledged to develop from the interaction of short-and long-term memory. While the behavioral, cellular, and molecular underpinnings of the long-term memory process have long been associated with the hippocampal formation, and this structure has become a major model system for the study of memory, the neural substrates of specific short-term memory functions have more and more become identified with prefrontal cortical areas (Goldman-Rakic, 1987; Fuster, 1989). The special nature of working memory was first identified in studies of human cognition and modern neuro-biological methods have identified a specific population of neurons, patterns of their intrinsic and extrinsic circuitry, and signaling molecules that are engaged in this process in animals. In this article, I will first define key features of working memory and then descdbe its biological basis in primates. Distinctive Features of a Working Memory System Working memory is the term applied to the type of memory that is active and relevant only for a short period of time, usually on the scale of seconds. A common example of working memory is keeping in mind a newly read phone number until it is dialed and then immediately forgotten. This process has been captu red by the analogy to a mental sketch pad (Baddeley, 1986) an~l is clearly different from the permanent inscription on neuronal circuitry due to learning. The criterion-useful or relevant only transiently distinguishes working memory from the processes that have been variously termed semantic (Tulving, 1972) or procedural (Squire and Cohen, 1984) memory, processes that can be considered associative in the traditional sense, i.e., information acquired by the repeated contiguity between stimuli and responses and/or consequences. If semantic and procedural memory are the processes by which stimuli and events acquire archival permanence , working memory is the process for the retrieval and proper utilization of this acquired knowledge. In this context, the contents of working memory are as much on the output side of long-term storage sites as they are an important source of input to those sites. Considerable evidence is now at hand to demonstrate that the brain obeys the distinction between working and other forms of memory , and that the prefrontal cortex has a preeminent role mainly in the former (Goldman.Rakic, 1987). However, memory-guided behavior obviously reflects the operation of a widely distributed system of brain structures and psychological functions, and understanding …",
"title": ""
},
{
"docid": "4f186e992cd7d5eadb2c34c0f26f4416",
"text": "a r t i c l e i n f o Mobile devices, namely phones and tablets, have long gone \" smart \". Their growing use is both a cause and an effect of their technological advancement. Among the others, their increasing ability to store and exchange sensitive information, has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture , whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \" mobile \" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays, it is most often referred to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …",
"title": ""
},
{
"docid": "739db4358ac89d375da0ed005f4699ad",
"text": "All doctors have encountered patients whose symptoms they cannot explain. These individuals frequently provoke despair and disillusionment. Many doctors make a link between inexplicable physical symptoms and assumed psychiatric ill ness. An array of adjectives in medicine apply to symptoms without established organic basis – ‘supratentorial’, ‘psychosomatic’, ‘functional’ – and these are sometimes used without reference to their real meaning. In psychiatry, such symptoms fall under the umbrella of the somatoform disorders, which includes a broad range of diagnoses. Conversion disorder is just one of these. Its meaning is not always well understood and it is often confused with somatisation disorder.† Our aim here is to clarify the notion of a conversion disorder (and the differences between conversion and other somatoform disorders) and to discuss prevalence, aetiology, management and prognosis.",
"title": ""
},
{
"docid": "39958f4825796d62e7a5935d04d5175d",
"text": "This paper presents a wireless system which enables real-time health monitoring of multiple patient(s). In health care centers patient's data such asheart rate needs to be constantly monitored. The proposed system monitors the heart rate and other such data of patient's body. For example heart rate is measured through a Photoplethysmograph. A transmitting module is attached which continuously transmits the encoded serial data using Zigbee module. A receiver unit is placed in doctor's cabin, which receives and decodes the data and continuously displays it on a User interface visible on PC/Laptop. Thus doctor can observe and monitor many patients at the same time. System also continuously monitors the patient(s) data and in case of any potential irregularities, in the condition of a patient, the alarm system connected to the system gives an audio-visual warning signal that the patient of a particular room needs immediate attention. In case, the doctor is not in his chamber, the GSM modem connected to the system also sends a message to all the doctors of that unit giving the room number of the patient who needs immediate care.",
"title": ""
},
{
"docid": "7c86594614a6bd434ee4e749eb661cee",
"text": "The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed. John R. Anderson is a cognitive scientist with an interest in cognitive architectures and intelligent tutoring systems; he is a Professor of Psychology and Computer Science at Carnegie Mellon University. Michael Matessa is a graduate student studying cognitive psychology at Carnegie Mellon University; his interests include cognitive architectures and modeling the acquisition of information from the environment. Christian Lebiere is a computer scientist with an interest in intelligent architectures; he is a Research Programmer in the Department of Psycholo and a graduate student in the School of Computer Science at Carnegie Me1 By on University. 440 ANDERSON, MATESSA, LEBIERE",
"title": ""
},
{
"docid": "42a0e0ab1ae2b190c913e69367b85001",
"text": "One of the most challenging problems facing network operators today is network attacks identification due to extensive number of vulnerabilities in computer systems and creativity of attackers. To address this problem, we present a deep learning approach for intrusion detection systems. Our approach uses Deep Auto-Encoder (DAE) as one of the most well-known deep learning models. The proposed DAE model is trained in a greedy layer-wise fashion in order to avoid overfitting and local optima. The experimental results on the KDD-CUP'99 dataset show that our approach provides substantial improvement over other deep learning-based approaches in terms of accuracy, detection rate and false alarm rate.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
}
] |
scidocsrr
|
2a0f9145e65397a911763cbce3384a5f
|
Deep Retinal Image Understanding
|
[
{
"docid": "01f08a2710177959bf37698577fefd4f",
"text": "Glaucoma is a chronic eye disease that leads to vision loss. As it cannot be cured, detecting the disease in time is important. Current tests using intraocular pressure (IOP) are not sensitive enough for population based glaucoma screening. Optic nerve head assessment in retinal fundus images is both more promising and superior. This paper proposes optic disc and optic cup segmentation using superpixel classification for glaucoma screening. In optic disc segmentation, histograms, and center surround statistics are used to classify each superpixel as disc or non-disc. A self-assessment reliability score is computed to evaluate the quality of the automated optic disc segmentation. For optic cup segmentation, in addition to the histograms and center surround statistics, the location information is also included into the feature space to boost the performance. The proposed segmentation methods have been evaluated in a database of 650 images with optic disc and optic cup boundaries manually marked by trained professionals. Experimental results show an average overlapping error of 9.5% and 24.1% in optic disc and optic cup segmentation, respectively. The results also show an increase in overlapping error as the reliability score is reduced, which justifies the effectiveness of the self-assessment. The segmented optic disc and optic cup are then used to compute the cup to disc ratio for glaucoma screening. Our proposed method achieves areas under curve of 0.800 and 0.822 in two data sets, which is higher than other methods. The methods can be used for segmentation and glaucoma screening. The self-assessment will be used as an indicator of cases with large errors and enhance the clinical deployment of the automatic segmentation and screening.",
"title": ""
}
] |
[
{
"docid": "8e520ad94c7555b9bb1546786b532adb",
"text": "We propose Machines Talking To Machines (M2M), a framework combining automation and crowdsourcing to rapidly bootstrap endto-end dialogue agents for goal-oriented dialogues in arbitrary domains. M2M scales to new tasks with just a task schema and an API client from the dialogue system developer, but it is also customizable to cater to task-specific interactions. Compared to the Wizard-of-Oz approach for data collection, M2M achieves greater diversity and coverage of salient dialogue flows while maintaining the naturalness of individual utterances. In the first phase, a simulated user bot and a domain-agnostic system bot converse to exhaustively generate dialogue “outlines”, i.e. sequences of template utterances and their semantic parses. In the second phase, crowd workers provide contextual rewrites of the dialogues to make the utterances more natural while preserving their meaning. The entire process can finish within a few hours. We propose a new corpus of 3,000 dialogues spanning 2 domains collected with M2M, and present comparisons with popular dialogue datasets on the quality and diversity of the surface forms and dialogue flows.",
"title": ""
},
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "7e25cdacce18dcb236f5347b8d2eaa76",
"text": "Nowadays, quadrotors have become a popular UAV research platform because full control can be achieved through speed variations in each and every one of its four rotors. Here, a non-linear dynamic model based on quaternions for attitude is presented as well as its corresponding LQR Gain Scheduling Control. All considerations for the quadrotor movements are described through their state variables. Modeling is carried out through the Newton-Euler formalism. Finally, the control system is simulated and the results shown in a novel and direct unit quaternion. Thus, a successful trajectory and attitude control of a quadrotor is achieved.",
"title": ""
},
{
"docid": "f9396d1384842a951b317fa63d9f312d",
"text": "Path loss models are used to estimate the path loss between transmitter and receiver in applications that involve transmission of electromagnetic waves. This paper describes the wireless underground sensor networks, which communicates within the soil and is different from the terrestrial wireless sensor network. An integrated study of electromagnetic waves propagation for a wireless underground sensor network is described in this paper. Here, path loss model and signal strength are presented, because of electromagnetic waves attenuation in the soil medium. Tests were conducted at various soil volumetric water contents and node deployment depths using three different frequencies, when soil consists of 0% sands, 35% silt and 15% clay. The results showed that radio signal path loss was minimum when frequency and moisture content are low. The experimental results revealed that a 20% increase in the soil moisture content increased the path loss by more than 30%.",
"title": ""
},
{
"docid": "e0c832f48352a5cb107a41b0907ad707",
"text": "In the same commercial ecosystem, although the different main bodies of logistics service such as transportation, suppliers and purchasers drive their interests differently, all the different stakeholders in the same business or consumers coexist mutually and share resources with each other. Based on this, this paper constructs a model of bonded logistics supply chain management based on the theory of commercial ecology, focusing on the logistics mode of transportation and multi-attribute behavior decision-making model based on the risk preference of the mode of transport of goods. After the weight is divided, this paper solves the model with ELECTRE-II algorithm and provides a scientific basis for decision-making of bonded logistics supply chain management through the decision model and ELECTRE-II algorithm.",
"title": ""
},
{
"docid": "26295dded01b06c8b11349723fea81dd",
"text": "The increasing popularity of parametric design tools goes hand in hand with the use of building performance simulation (BPS) tools from the early design phase. However, current methods require a significant computational time and a high number of parameters as input, as they are based on traditional BPS tools conceived for detailed building design phase. Their application to the urban scale is hence difficult. As an alternative to the existing approaches, we developed an interface to CitySim, a validated building simulation tool adapted to urban scale assessments, bundled as a plug-in for Grasshopper, a popular parametric design platform. On the one hand, CitySim allows faster simulations and requires fewer parameters than traditional BPS tools, as it is based on algorithms providing a good trade-off between the simulations requirements and their accuracy at the urban scale; on the other hand, Grasshopper allows the easy manipulation of building masses and energy simulation parameters through semi-automated parametric",
"title": ""
},
{
"docid": "999070b182a328b1927be4575f04e434",
"text": "Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.",
"title": ""
},
{
"docid": "2f8a74054d456d1136f0a36303b722bc",
"text": "The swarm intelligence paradigm has proven to have very interesting properties such as robustness, flexibility and ability to solve complex problems exploiting parallelism and self-organization. Several robotics implementations of this paradigm confirm that these properties can be exploited for the control of a population of physically independent mobile robots. The work presented here introduces a new robotic concept called swarm-bot in which the collective interaction exploited by the swarm intelligence mechanism goes beyond the control layer and is extended to the physical level. This implies the addition of new mechanical functionalities on the single robot, together with new electronics and software to manage it. These new functionalities, even if not directly related to mobility and navigation, allow to address complex mobile robotics problems, such as extreme all-terrain exploration. The work shows also how this new concept is investigated using a simulation tool (swarmbot3d) specifically developed for quickly designing and evaluating new control algorithms. Experimental work shows how the simulated detailed representation of one s-bot has been calibrated to match the behaviour of the real robot.",
"title": ""
},
{
"docid": "d5f8c9f7a495d9ebc5517b18ced3e784",
"text": "BACKGROUND\nFor some adolescents feeling lonely can be a protracted and painful experience. It has been suggested that engaging in health risk behaviours such as substance use and sexual behaviour may be a way of coping with the distress arising from loneliness during adolescence. However, the association between loneliness and health risk behaviour has been little studied to date. To address this research gap, the current study examined this relation among Russian and U.S. adolescents.\n\n\nMETHODS\nData were used from the Social and Health Assessment (SAHA), a school-based survey conducted in 2003. A total of 1995 Russian and 2050 U.S. students aged 13-15 years old were included in the analysis. Logistic regression was used to examine the association between loneliness and substance use, sexual risk behaviour, and violence.\n\n\nRESULTS\nAfter adjusting for demographic characteristics and depressive symptoms, loneliness was associated with a significantly increased risk of adolescent substance use in both Russia and the United States. Lonely Russian girls were significantly more likely to have used marijuana (odds ratio [OR]: 2.28; confidence interval [CI]: 1.17-4.45), while lonely Russian boys had higher odds for past 30-day smoking (OR, 1.87; CI, 1.08-3.24). In the U.S. loneliness was associated with the lifetime use of illicit drugs (excepting marijuana) among boys (OR, 3.09; CI, 1.41-6.77) and with lifetime marijuana use (OR, 1.79; CI, 1.26-2.55), past 30-day alcohol consumption (OR, 1.80; CI, 1.18-2.75) and past 30-day binge drinking (OR, 2.40; CI, 1.56-3.70) among girls. The only relation between loneliness and sexual risk behaviour was among Russian girls, where loneliness was associated with significantly higher odds for ever having been pregnant (OR, 1.69; CI: 1.12-2.54). Loneliness was not associated with violent behaviour among boys or girls in either country.\n\n\nCONCLUSION\nLoneliness is associated with adolescent health risk behaviour among boys and girls in both Russia and the United States. Further research is now needed in both settings using quantitative and qualitative methods to better understand the association between loneliness and health risk behaviours so that effective interventions can be designed and implemented to mitigate loneliness and its effects on adolescent well-being.",
"title": ""
},
{
"docid": "cf40a2d0e816a120bc6950f631b31754",
"text": "Recommender systems support users in selecting items and services in an information-rich environment. Although recommender systems have been improved in terms of accuracy, such systems are still insufficient in terms of novelty and serendipity, giving unsatisfactory results to users. Two methods of “serendipitous recommendation” are therefore proposed. However, a method for recommending serendipitous items accurately to users does not yet exist, because what kinds of items are serendipitous is not clearly defined. Accordingly, a human preference model of serendipitous items based on actual data concerning a user’s impression collected by questionnaires was devised. Two serendipitous recommendation methods based on the model were devised and evaluated according to a user’s actual impression. The evaluation results show that one of these recommendation methods, the one using general unexpectedness independent of user profiles, can recommend the serendipitous items accurately.",
"title": ""
},
{
"docid": "62c90e9a0a3c1bd75366cc28b57351a8",
"text": "There is an extensive literature about online controlled experiments, both on the statistical methods available to analyze experiment results [1, 2, 3] as well as on the infrastructure built by several large scale Internet companies [4, 5, 6, 7] but also on the organizational challenges of embracing online experiments to inform product development [6, 8]. At Booking.com we have been conducting evidenced based product development using online experiments for more than ten years. Our methods and infrastructure were designed from their inception to reflect Booking.com culture, that is, with democratization and decentralization of experimentation and decision making in mind. In this paper we explain how building a central repository of successes and failures to allow for knowledge sharing, having a generic and extensible code library which enforces a loose coupling between experimentation and business logic, monitoring closely and transparently the quality and the reliability of the data gathering pipelines to build trust in the experimentation infrastructure, and putting in place safeguards to enable anyone to have end to end ownership of their experiments have allowed such a large organization as Booking.com to truly and successfully democratize",
"title": ""
},
{
"docid": "15dfa65d40eb6cd60c3df952a7b864c4",
"text": "The lack of theoretical progress in the IS field may be surprising. From an empirical viewpoint, the IS field resembles other management fields. Specifically, as fields of inquiry develop, their theories are often placed on a hierarchy from ad hoc classification systems (in which categories are used to summarize empirical observations), to taxonomies (in which the relationships between the categories can be described), to conceptual frameworks (in which propositions summarize explanations and predictions), to theoretical systems (in which laws are contained within axiomatic or formal theories) (Parsons and Shils 1962). In its short history, IS research has developed from classification systems to conceptual frameworks. In the 1970s, it was considered pre-paradigmatic. Today, it is approaching the level of development in empirical research of other management fields, like organizational behavior (Webster 2001). However, unlike other fields that have journals devoted to review articles (e.g., the Academy of Management Review), we see few review articles in ISand hence the creation of MISQ Review as a device for accelerating development of the discipline.",
"title": ""
},
{
"docid": "32b860121b49bd3a61673b3745b7b1fd",
"text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",
"title": ""
},
{
"docid": "66cf3513f01a9164836875275f775b61",
"text": "We present a detailed evaluation and analysis of neural sequence-to-sequence models for text simplification on two distinct datasets: Wikipedia and Newsela. We employ both human and automatic evaluation to investigate the capacity of neural models to generalize across corpora, and we highlight challenges that these models face when tested on a different genre. Furthermore, we establish a strong baseline on the Newsela dataset and show that a simple neural architecture can be efficiently used for in-domain and cross-domain text",
"title": ""
},
{
"docid": "f6463026a75a981c22e00a98990a095a",
"text": "Thanks to their anonymity (pseudonymity) and elimination of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some of these are criminal, e.g., money laundering, illicit marketplaces, and ransomware. Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of smart contracts, programs that autonomously intermediate transactions. In this paper, we explore the risk of smart contracts fueling new criminal ecosystems. Specifically, we show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).\n We show that CSCs for leakage of secrets (a la Wikileaks) are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for theft of cryptographic keys can be achieved using primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We show similarly that authenticated data feeds, an emerging feature of smart contract systems, can facilitate CSCs for real-world crimes (e.g., property crimes).\n Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the promise of smart contracts for beneficial goals.",
"title": ""
},
{
"docid": "d83a771852fe065cd376b60966f29972",
"text": "Coupled microring arrangements with balanced gain and loss, also known as parity-time symmetric systems, are investigated both analytically and experimentally. In these configurations, stable single-mode lasing can be achieved at pump powers well above threshold. This self-adaptive mode management technique is broadband and robust to small fabrication imperfections. The results presented in this paper provide a new avenue in designing mode-selective chip-scale in-plane semiconductor lasers by utilizing the complex dynamics of coupled gain/loss cavities.",
"title": ""
},
{
"docid": "2bf2e36bbbbdd9e091395636fcc2a729",
"text": "An open-source framework for real-time structured light is presented. It is called “SLStudio”, and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and aquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentyfull, however due to the computational constraints, all public implementations so far are limited to offline processing. With “SLStudio”, we are making a platform available which enables researchers from many different fields to build application specific real time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.",
"title": ""
},
{
"docid": "41e434a96d528881434e39c536f6b4e7",
"text": "Following recent breakthroughs in convolutional neural networks and monolithic model architectures, state-ofthe-art object detection models can reliably and accurately scale into the realm of up to thousands of classes. Things quickly break down, however, when scaling into the tens of thousands, or, eventually, to millions or billions of unique objects. Further, bounding box-trained end-to-end models require extensive training data. Even though – with some tricks using hierarchies – one can sometimes scale up to thousands of classes, the labor requirements for clean image annotations quickly get out of control. In this paper, we present a two-layer object detection method for brand logos and other stylized objects for which prototypical images exist. It can scale to large numbers of unique classes. Our first layer is a CNN from the Single Shot Multibox Detector family of models that learns to propose regions where some stylized object is likely to appear. The contents of a proposed bounding box is then run against an image index that is targeted for the retrieval task at hand. The proposed architecture scales to a large number of object classes, allows to continuously add new classes without retraining, and exhibits state-of-the-art quality on a stylized object detection task such as logo recognition.",
"title": ""
},
{
"docid": "45cff09810b8741d8be1010aa6ff3000",
"text": "This paper discusses experience in applying time harmonic three-dimensional (3D) finite element (FE) analysis in analyzing an axial-flux (AF) solid-rotor induction motor (IM). The motor is a single rotor - single stator AF IM. The construction presented in this paper has not been analyzed before in any technical documents. The field analysis and the comparison of torque calculation results of the 3D calculations with measured torque results are presented",
"title": ""
},
{
"docid": "2b16725c22f06b8155ce948636877004",
"text": "The Internet of Things (IoT) aims to connect billions of smart objects to the Internet, which can bring a promising future to smart cities. These objects are expected to generate large amounts of data and send the data to the cloud for further processing, especially for knowledge discovery, in order that appropriate actions can be taken. However, in reality sensing all possible data items captured by a smart object and then sending the complete captured data to the cloud is less useful. Further, such an approach would also lead to resource wastage (e.g., network, storage, etc.). The Fog (Edge) computing paradigm has been proposed to counterpart the weakness by pushing processes of knowledge discovery using data analytics to the edges. However, edge devices have limited computational capabilities. Due to inherited strengths and weaknesses, neither Cloud computing nor Fog computing paradigm addresses these challenges alone. Therefore, both paradigms need to work together in order to build a sustainable IoT infrastructure for smart cities. In this article, we review existing approaches that have been proposed to tackle the challenges in the Fog computing domain. Specifically, we describe several inspiring use case scenarios of Fog computing, identify ten key characteristics and common features of Fog computing, and compare more than 30 existing research efforts in this domain. Based on our review, we further identify several major functionalities that ideal Fog computing platforms should support and a number of open challenges toward implementing them, to shed light on future research directions on realizing Fog computing for building sustainable smart cities.",
"title": ""
}
] |
scidocsrr
|
e019a1d032c42d22677425c640b6da08
|
An Effective Method of Probe Calibration in Phase-Resolved Near-Field Scanning for EMI Application
|
[
{
"docid": "76d2ba510927bd7f56155e1cf1cbbc52",
"text": "As the first part of a study that aims to propose tools to take into account some electromagnetic compatibility aspects, we have developed a model to predict the electric and magnetic fields emitted by a device. This model is based on a set of equivalent sources (electric and magnetic dipoles) obtained from the cartographies of the tangential components of electric and magnetic near fields. One of its features is to be suitable for a commercial electromagnetic simulation tool based on a finite element method. This paper presents the process of modeling and the measurement and calibration procedure to obtain electromagnetic fields necessary for the model; the validation and the integration of the model into a commercial electromagnetic simulator are then performed on a Wilkinson power divider.",
"title": ""
}
] |
[
{
"docid": "49a538fc40d611fceddd589b0c9cb433",
"text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.",
"title": ""
},
{
"docid": "bddec3337cfbc17412b042b58e1cdfeb",
"text": "Business organisations are constantly looking for ways to gain an advantage over their competitors (Beyleveld & Schurink, 2005; Castaneda & Toulson, 2013). Historically, their focus was on producing as much as possible without considering exact demand (Turner & Chung, 2005). Recently, businesses have embarked upon finding more efficient ways to deal with large turnovers (Umble, Haft & Umble, 2003). One way of achieving this is by employing an Enterprise Resource Planning (ERP) system. An ERP system is a mandatory, integrated, customised, packaged software-based system that handles most of the system requirements in all business operational functions such as finance, human resources, manufacturing, sales and marketing (Wua, Onga & Hsub, 2008). Although expectations from ERP systems are high, these systems have not always led to significant organisational enhancement (Soh, Kien & Tay-Yap, 2000) and most ERP projects turn out to be over budget, not on time and unsuccessful (Abugabah & Sanzogni, 2010; Hong & Kim, 2002; Kumar, Maheshwari & Kumar, 2003).",
"title": ""
},
{
"docid": "2189f0c48e453231bed41574c39f093c",
"text": "Rapid isolation of high-purity microbial genomic DNA is necessary for genome analysis. In this study, the authors compared a one-hour procedure using a microwave with enzymatic and boiling methods of genomic DNA extraction from Gram-negative and Gram-positive bacteria. High DNA concentration and purity were observed for both MRSA and ESBL strains (80.1 and 91.1 μg/ml; OD260/280, 1.82 and 1.70, respectively) when the extraction protocol included microwave pre-heating. DNA quality was further confirmed by PCR detection of mecA and CTX-M. In conclusion, the microwave-based procedure was rapid, efficient, cost-effective, and applicable for both Gram-positive and Gram-negative bacteria.",
"title": ""
},
{
"docid": "3bdd6168db10b8b195ce88ae9c4a75f9",
"text": "Nowadays Intrusion Detection System (IDS) which is increasingly a key element of system security is used to identify the malicious activities in a computer system or network. There are different approaches being employed in intrusion detection systems, but unluckily each of the technique so far is not entirely ideal. The prediction process may produce false alarms in many anomaly based intrusion detection systems. With the concept of fuzzy logic, the false alarm rate in establishing intrusive activities can be reduced. A set of efficient fuzzy rules can be used to define the normal and abnormal behaviors in a computer network. Therefore some strategy is needed for best promising security to monitor the anomalous behavior in computer network. In this paper I present a few research papers regarding the foundations of intrusion detection systems, the methodologies and good fuzzy classifiers using genetic algorithm which are the focus of current development efforts and the solution of the problem of Intrusion Detection System to offer a realworld view of intrusion detection. Ultimately, a discussion of the upcoming technologies and various methodologies which promise to improve the capability of computer systems to detect intrusions is offered.",
"title": ""
},
{
"docid": "5572ab4560ef280e72c50d8def00e4ab",
"text": "Methylation of N6-adenosine (m6A) has been observed in many different classes of RNA, but its prevalence in microRNAs (miRNAs) has not yet been studied. Here we show that a knockdown of the m6A demethylase FTO affects the steady-state levels of several miRNAs. Moreover, RNA immunoprecipitation with an anti-m6A-antibody followed by RNA-seq revealed that a significant fraction of miRNAs contains m6A. By motif searches we have discovered consensus sequences discriminating between methylated and unmethylated miRNAs. The epigenetic modification of an epigenetic modifier as described here adds a new layer to the complexity of the posttranscriptional regulation of gene expression.",
"title": ""
},
{
"docid": "c28b1ce1bcd5e56eb807bed4e9c167af",
"text": "In the recent years, new molecules have appeared in the illicit market, claimed to contain \"non-illegal\" compounds, although exhibiting important psychoactive effects; this heterogeneous and rapidly evolving class of compounds are commonly known as \"New Psychoactive Substances\" or, less properly, \"Smart Drugs\" and are easily distributed through the e-commerce or in the so-called \"Smart Shops\". They include, among other, synthetic cannabinoids, cathinones and tryptamine analogs of psylocin. Whereas cases of intoxication and death have been reported, the phenomenon appears to be largely underestimated and is a matter of concern for Public Health. One of the major points of concern depends on the substantial ineffectiveness of the current methods of toxicological screening of biological samples to identify the new compounds entering the market. These limitations emphasize an urgent need to increase the screening capabilities of the toxicology laboratories, and to develop rapid, versatile yet specific assays able to identify new molecules. The most recent advances in mass spectrometry technology, introducing instruments capable of detecting hundreds of compounds at nanomolar concentrations, are expected to give a fundamental contribution to broaden the diagnostic spectrum of the toxicological screening to include not only all these continuously changing molecules but also their metabolites. In the present paper a critical overview of the opportunities, strengths and limitations of some of the newest analytical approaches is provided, with a particular attention to liquid phase separation techniques coupled to high accuracy, high resolution mass spectrometry.",
"title": ""
},
{
"docid": "d348d178b17d63ae49cfe6fd4e052758",
"text": "BACKGROUND & AIMS\nChanges in gut microbiota have been reported to alter signaling mechanisms, emotional behavior, and visceral nociceptive reflexes in rodents. However, alteration of the intestinal microbiota with antibiotics or probiotics has not been shown to produce these changes in humans. We investigated whether consumption of a fermented milk product with probiotic (FMPP) for 4 weeks by healthy women altered brain intrinsic connectivity or responses to emotional attention tasks.\n\n\nMETHODS\nHealthy women with no gastrointestinal or psychiatric symptoms were randomly assigned to groups given FMPP (n = 12), a nonfermented milk product (n = 11, controls), or no intervention (n = 13) twice daily for 4 weeks. The FMPP contained Bifidobacterium animalis subsp Lactis, Streptococcus thermophiles, Lactobacillus bulgaricus, and Lactococcus lactis subsp Lactis. Participants underwent functional magnetic resonance imaging before and after the intervention to measure brain response to an emotional faces attention task and resting brain activity. Multivariate and region of interest analyses were performed.\n\n\nRESULTS\nFMPP intake was associated with reduced task-related response of a distributed functional network (49% cross-block covariance; P = .004) containing affective, viscerosensory, and somatosensory cortices. Alterations in intrinsic activity of resting brain indicated that ingestion of FMPP was associated with changes in midbrain connectivity, which could explain the observed differences in activity during the task.\n\n\nCONCLUSIONS\nFour-week intake of an FMPP by healthy women affected activity of brain regions that control central processing of emotion and sensation.",
"title": ""
},
{
"docid": "05520d9ec32fca131dab3a7a0fbea2f1",
"text": "Non-Orthogonal Multiple Access (NOMA) is considered as a promising downlink Multiple Access (MA) scheme for future radio access. In this paper two power allocation strategies for NOMA are proposed. The first strategy is based on channel state information experienced by NOMA users. The other strategy is based on pre-defined QoS per NOMA user. In this paper we develop mathematical models for the proposed strategies. Also we clarify the potential gains of NOMA using proposed power allocation strategies over Orthogonal Multiple Access (OMA). Simulation results showed that NOMA performance using the proposed strategies achieves superior performance compared to that for OMA.",
"title": ""
},
{
"docid": "bfe58868ab05a6ba607ef1f288d37f33",
"text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.",
"title": ""
},
{
"docid": "42a2c21ded0b785baa9a5a3a3597e435",
"text": "The conduct of many trials for the successful production of large quantities of pure proteins for structural biology and biotechnology applications, particularly of active, authentically processed enzymes, large eukaryotic multi-subunit complexes and membrane proteins, has spurred the development of recombinant expression systems. Beyond the well-established Escherichia coli, mammalian cell culture and baculovirus-infected insect cell expression systems, a plethora of alternative expression systems has been discovered, engineered, matured and deployed, resulting in crystal, nuclear magnetic resonance, and electron microscopy structures. In this review, we visit alternative expression hosts for structural biology ranging from bacteria and archaea to filamentous and unicellular yeasts and protozoa, with particular emphasis on their applicability to the structural determination of high-value, challenging proteins and complexes.",
"title": ""
},
{
"docid": "3b576f0ba86940be5cfcbe7b6aa44af7",
"text": "In this paper, we present an effective method to analyze the recognition confidence of handwritten Chinese character, based on the softmax regression score of a high performance convolutional neural network (CNN). Through careful and thorough statistics of 827,685 testing samples that randomly selected from total 8836 different classes of Chinese characters, we find that the confidence measurement based on CNN is an useful metric to know how reliable the recognition results are. Furthermore, we find by experiments that the recognition confidence can be used to find out similar and confusable character-pairs, to check wrongly or cursively written samples, and even to discover and correct mislabeled samples. Many interesting observation and statistics are given and analyzed in this study.",
"title": ""
},
{
"docid": "279870c84659e0eb6668e1ec494e77c9",
"text": "There is a need to move from opinion-based education to evidence-based education. Best evidence medical education (BEME) is the implementation, by teachers in their practice, of methods and approaches to education based on the best evidence available. It involves a professional judgement by the teacher about his/her teaching taking into account a number of factors-the QUESTS dimensions. The Quality of the research evidence available-how reliable is the evidence? the Utility of the evidence-can the methods be transferred and adopted without modification, the Extent of the evidence, the Strength of the evidence, the Target or outcomes measured-how valid is the evidence? and the Setting or context-how relevant is the evidence? The evidence available can be graded on each of the six dimensions. In the ideal situation the evidence is high on all six dimensions, but this is rarely found. Usually the evidence may be good in some respects, but poor in others.The teacher has to balance the different dimensions and come to a decision on a course of action based on his or her professional judgement.The QUESTS dimensions highlight a number of tensions with regard to the evidence in medical education: quality vs. relevance; quality vs. validity; and utility vs. the setting or context. The different dimensions reflect the nature of research and innovation. Best Evidence Medical Education encourages a culture or ethos in which decision making takes place in this context.",
"title": ""
},
{
"docid": "2ea99ae4dd94095e7f758353d35839ca",
"text": "An increasing number of companies rely on distributed data storage and processing over large clusters of commodity machines for critical business decisions. Although plain MapReduce systems provide several benefits, they carry certain limitations that impact developer productivity and optimization opportunities. Higher level programming languages plus conceptual data models have recently emerged to address such limitations. These languages offer a single machine programming abstraction and are able to perform sophisticated query optimization and apply efficient execution strategies. In massively distributed computation, data shuffling is typically the most expensive operation and can lead to serious performance bottlenecks if not done properly. An important optimization opportunity in this environment is that of judicious placement of repartitioning operators and choice of alternative implementations. In this paper we discuss advanced partitioning strategies, their implementation, and how they are integrated in the Microsoft Scope system. We show experimentally that our approach significantly improves performance for a large class of real-world jobs.",
"title": ""
},
{
"docid": "43c8a3e596b54e135fce272b589bc167",
"text": "The present study examined the effects of steep high-frequency sensorineural hearing loss (SHF-SNHL) on speech recognition using acoustic temporal fine structure (TFS) in the low-frequency region where the absolute thresholds appeared to be normal. In total, 28 participants with SHF-SNHL were assigned to 3 groups according to the cut-off frequency (1, 2, and 4 kHz, respectively) of their pure-tone absolute thresholds. Fourteen age-matched normal-hearing (NH) individuals were enrolled as controls. For each Mandarin sentence, the acoustic TFS in 10 frequency bands (each 3-ERB wide) was extracted using the Hilbert transform and was further lowpass filtered at 1, 2, and 4 kHz. Speech recognition scores were compared among the NH and 1-, 2-, and 4-kHz SHF-SNHL groups using stimuli with varying bandwidths. Results showed that speech recognition with the same TFS-speech stimulus bandwidth differed significantly in groups and filtering conditions. Sentence recognition in quiet conditions was better than that in noise. Compared with the NH participants, nearly all the SHF-SNHL participants showed significantly poorer sentence recognition within their frequency regions with \"normal hearing\" (defined clinically by normal absolute thresholds) in both quiet and noisy conditions. These may result from disrupted auditory nerve function in the \"normal hearing\" low-frequency regions.",
"title": ""
},
{
"docid": "62d3ead21fb561ff4f0657b05f828752",
"text": "In this paper, a wideband circularly polarized cylindrical shaped dielectric resonator antenna (DRA) with simple microstrip feed network has been designed and investigated. The proposed design uses dual vertical microstrip lines arranged in a perpendicular fashion to excite fundamental orthogonal hybrid <inline-formula> <tex-math notation=\"LaTeX\">${HE}_{11\\delta }^{x}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${HE}_{11\\delta }^{y}$ </tex-math></inline-formula> modes in the cylindrical DR. The Phase quadrature relationships between orthogonal modes have been attained by varying corresponding microstrips heights. To ratify the simulation results, an antenna prototype is fabricated and measured. Measured input reflection coefficient and axial ratio bandwidth (at <inline-formula> <tex-math notation=\"LaTeX\">$\\Phi =0^{\\circ }$ </tex-math></inline-formula>, <inline-formula> <tex-math notation=\"LaTeX\">$\\theta =0^{\\circ }$ </tex-math></inline-formula>) of 30.37% (2.82–3.83 GHz) and 24.6% (2.75–3.52 GHz) has been achieved, respectively. This antenna design achieves an average gain of 5.5 dBi and radiation efficiency of above 96% over operational frequency band. Justifiable agreement between simulated and fabricated antenna results are obtained.",
"title": ""
},
{
"docid": "24e9f079fe1fd0155c2f6a948f021da4",
"text": "Current blood glucose monitoring (BGM) techniques are invasive as they require a finger prick blood sample, a repetitively painful process that creates the risk of infection. BGM is essential to avoid complications arising due to abnormal blood glucose levels in diabetic patients. Laser light-based sensors have demonstrated a superior potential for BGM. Existing near-infrared (NIR)-based BGM techniques have shortcomings, such as the absorption of light in human tissue, higher signal-to-noise ratio, and lower accuracy, and these disadvantages have prevented NIR techniques from being employed for commercial BGM applications. A simple, compact, and cost-effective non-invasive device using visible red laser light of wavelength 650 nm for BGM (RL-BGM) is implemented in this paper. The RL-BGM monitoring device has three major technical advantages over NIR. Unlike NIR, red laser light has ~30 times better transmittance through human tissue. Furthermore, when compared with NIR, the refractive index of laser light is more sensitive to the variations in glucose level concentration resulting in faster response times ~7–10 s. Red laser light also demonstrates both higher linearity and accuracy for BGM. The designed RL-BGM device has been tested for both in vitro and in vivo cases and several experimental results have been generated to ensure the accuracy and precision of the proposed BGM sensor.",
"title": ""
},
{
"docid": "48dbd48a531867486b2d018442f64ebb",
"text": "The purpose of this paper is to analyze the extent to which the use of social media can support customer knowledge management (CKM) in organizations relying on a traditional bricks-and-mortar business model. The paper uses a combination of qualitative case study and netnography on Starbucks, an international coffee house chain. Data retrieved from varied sources such as newspapers, newswires, magazines, scholarly publications, books, and social media services were textually analyzed. Three major findings could be culled from the paper. First, Starbucks deploys a wide range of social media tools for CKM that serve as effective branding and marketing instruments for the organization. Second, Starbucks redefines the roles of its customers through the use of social media by transforming them from passive recipients of beverages to active contributors of innovation. Third, Starbucks uses effective strategies to alleviate customers’ reluctance for voluntary knowledge sharing, thereby promoting engagement in social media. The scope of the paper is limited by the window of the data collection period. Hence, the findings should be interpreted in the light of this constraint. The lessons gleaned from the case study suggest that social media is not a tool exclusive to online businesses. It can be a potential game-changer in supporting CKM efforts even for traditional businesses. This paper represents one of the earliest works that analyzes the use of social media for CKM in an organization that relies on a traditional bricks-and-mortar business model.",
"title": ""
},
{
"docid": "4122ab690b70d0d6281a68890db9e92d",
"text": "Machine learning, a collection of data-analytical techniques aimed at building predictive models from multi-dimensional datasets, is becoming integral to modern biological research. By enabling one to generate models that learn from large datasets and make predictions on likely outcomes, machine learning can be used to study complex cellular systems such as biological networks. Here, we provide a primer on machine learning for life scientists, including an introduction to deep learning. We discuss opportunities and challenges at the intersection of machine learning and network biology, which could impact disease biology, drug discovery, microbiome research, and synthetic biology.",
"title": ""
},
{
"docid": "02d5d8e3ebdee2a2c1919e3fc9862109",
"text": "Biometric systems are vulnerable to the diverse attacks that emerged as a challenge to assure the reliability in adopting these systems in real-life scenario. In this work, we propose a novel solution to detect a presentation attack based on exploring both statistical and Cepstral features. The proposed Presentation Attack Detection (PAD) algorithm will extract the statistical features that can capture the micro-texture variation using Binarized Statistical Image Features (BSIF) and Cepstral features that can reflect the micro changes in frequency using 2D Cepstrum analysis. We then fuse these features to form a single feature vector before making a decision on whether a capture attempt is a normal presentation or an artefact presentation using linear Support Vector Machine (SVM). Extensive experiments carried out on a publicly available face and iris spoof database show the efficacy of the proposed PAD algorithm with an Average Classification Error Rate (ACER) = 10.21% on face and ACER = 0% on the iris biometrics.",
"title": ""
},
{
"docid": "e1bb6bcd75b14e970c461ef0b55dc9fe",
"text": "The aim of this study was to assess and compare the body image of breast cancer patients (n = 70) whom underwent breast conserving surgery or mastectomy, as well as to compare patients’ scores with that of a sample of healthy control women (n = 70). A secondary objective of this study was to examine the reliability and validity of the 10-item Greek version of the Body Image Scale, a multidimensional measure of body image changes and concerns. Exploratory and confirmatory factor analyses on the items of this scale resulted in a two factor solution, indicating perceived attractiveness, and body and appearance satisfaction. Comparison of the two surgical groups revealed that women treated with mastectomy felt less attractive and more self-conscious, did not like their overall appearance, were dissatisfied with their scar, and avoided contact with people. Hierarchical regression analysis showed that more general body image concerns were associated with belonging to the mastectomy group, compared to the cancer-free group of women. Implications for clinical practice and recommendations for future investigations are discussed.",
"title": ""
}
] |
scidocsrr
|
46f981312e5c729d7362fdb5d8aaf491
|
Darwin the detective: Observable facial muscle contractions reveal emotional high-stakes lies
|
[
{
"docid": "19dd8a5dd93964db26a8b8e26285b996",
"text": "In this article we argue that self-deception evolved to facilitate interpersonal deception by allowing people to avoid the cues to conscious deception that might reveal deceptive intent. Self-deception has two additional advantages: It eliminates the costly cognitive load that is typically associated with deceiving, and it can minimize retribution if the deception is discovered. Beyond its role in specific acts of deception, self-deceptive self-enhancement also allows people to display more confidence than is warranted, which has a host of social advantages. The question then arises of how the self can be both deceiver and deceived. We propose that this is achieved through dissociations of mental processes, including conscious versus unconscious memories, conscious versus unconscious attitudes, and automatic versus controlled processes. Given the variety of methods for deceiving others, it should come as no surprise that self-deception manifests itself in a number of different psychological processes, and we discuss various types of self-deception. We then discuss the interpersonal versus intrapersonal nature of self-deception before considering the levels of consciousness at which the self can be deceived. Finally, we contrast our evolutionary approach to self-deception with current theories and debates in psychology and consider some of the costs associated with self-deception.",
"title": ""
},
{
"docid": "78ae476295aa266a170a981a34767bdd",
"text": "Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.",
"title": ""
}
] |
[
{
"docid": "7f43ad2fd344aa7260e3af33d3f69e32",
"text": "Charge pump circuits are used for obtaining higher voltages than normal power supply voltage in flash memories, DRAMs and low voltage designs. In this paper, we present a charge pump circuit in standard CMOS technology that is suited for low voltage operation. Our proposed charge pump uses a cross- connected NMOS cell as the basic element and PMOS switches are employed to connect one stage to the next. The simulated output voltages of the proposed 4 stage charge pump for input voltage of 0.9 V, 1.2 V, 1.5 V, 1.8 V and 2.1 V are 3.9 V, 5.1 V, 6.35 V, 7.51 V and 8.4 V respectively. This proposed charge pump is suitable for low power CMOS mixed-mode designs.",
"title": ""
},
{
"docid": "f50342dfacd198dc094ef96415de4899",
"text": "While the ubiquity and importance of nonliteral language are clear, people’s ability to use and understand it remains a mystery. Metaphor in particular has been studied extensively across many disciplines in cognitive science. One approach focuses on the pragmatic principles that listeners utilize to infer meaning from metaphorical utterances. While this approach has generated a number of insights about how people understand metaphor, to our knowledge there is no formal model showing that effects in metaphor understanding can arise from basic principles of communication. Building upon recent advances in formal models of pragmatics, we describe a computational model that uses pragmatic reasoning to interpret metaphorical utterances. We conduct behavioral experiments to evaluate the model’s performance and show that our model produces metaphorical interpretations that closely fit behavioral data. We discuss implications of the model for metaphor understanding, principles of communication, and formal models of language understanding.",
"title": ""
},
{
"docid": "d93eb3d262bfe5dee777d27d21407cc6",
"text": "Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to \"rationalize,\" or if implausible assumptions are necessary to explain it within the paradigm. This column presents a series of such anomalies. Readers are invited to suggest topics for future columns by sending a note with some reference to (or better yet copies oO the relevant research. Comments on anomalies printed here are also welcome. The address is: Richard Thaler, c/o Journal of Economic Perspectives, Johnson Graduate School of Management, Malott Hall, Cornell University, Ithaca, NY 14853. After this issue, the \"Anomalies\" column will no longer appear in every issue and instead will appear occasionally, when a pressing anomaly crosses Dick Thaler's desk. However, suggestions for new columns and comments on old ones are still welcome. Thaler would like to quash one rumor before it gets started, namely that he is cutting back because he has run out of anomalies. Au contraire, it is the dilemma of choosing which juicy anomaly to discuss that lakes so much time.",
"title": ""
},
{
"docid": "7ad4c2f0b66a11891bd19d175becf5c2",
"text": "The presence of noise represent a relevant issue in image feature extraction and classification. In deep learning, representation is learned directly from the data and, therefore, the classification model is influenced by the quality of the input. However, the ability of deep convolutional neural networks to deal with images that have a different quality when compare to those used to train the network is still to be fully understood. In this paper, we evaluate the generalization of models learned by different networks using noisy images. Our results show that noise cause the classification problem to become harder. However, when image quality is prone to variations after deployment, it might be advantageous to employ models learned using noisy data.",
"title": ""
},
{
"docid": "ffd84e3418a6d1d793f36bfc2efed6be",
"text": "Anterior cingulate cortex (ACC) is a part of the brain's limbic system. Classically, this region has been related to affect, on the basis of lesion studies in humans and in animals. In the late 1980s, neuroimaging research indicated that ACC was active in many studies of cognition. The findings from EEG studies of a focal area of negativity in scalp electrodes following an error response led to the idea that ACC might be the brain's error detection and correction device. In this article, these various findings are reviewed in relation to the idea that ACC is a part of a circuit involved in a form of attention that serves to regulate both cognitive and emotional processing. Neuroimaging studies showing that separate areas of ACC are involved in cognition and emotion are discussed and related to results showing that the error negativity is influenced by affect and motivation. In addition, the development of the emotional and cognitive roles of ACC are discussed, and how the success of this regulation in controlling responses might be correlated with cingulate size. Finally, some theories are considered about how the different subdivisions of ACC might interact with other cortical structures as a part of the circuits involved in the regulation of mental and emotional activity.",
"title": ""
},
{
"docid": "a76826da7f077cf41aaa7c8eca9be3fe",
"text": "In this paper we present an open-source design for the development of low-complexity, anthropomorphic, underactuated robot hands with a selectively lockable differential mechanism. The differential mechanism used is a variation of the whiffletree (or seesaw) mechanism, which introduces a set of locking buttons that can block the motion of each finger. The proposed design is unique since with a single motor and the proposed differential mechanism the user is able to control each finger independently and switch between different grasping postures in an intuitive manner. Anthropomorphism of robot structure and motion is achieved by employing in the design process an index of anthropomorphism. The proposed robot hands can be easily fabricated using low-cost, off-the-shelf materials and rapid prototyping techniques. The efficacy of the proposed design is validated through different experimental paradigms involving grasping of everyday life objects and execution of daily life activities. The proposed hands can be used as affordable prostheses, helping amputees regain their lost dexterity.",
"title": ""
},
{
"docid": "f8ac1e028ec61c8b1dcf8ce138ea1776",
"text": "This paper presents power-control strategies of a grid-connected hybrid generation system with versatile power transfer. The hybrid system is the combination of photovoltaic (PV) array, wind turbine, and battery storage via a common dc bus. Versatile power transfer was defined as multimodes of operation, including normal operation without use of battery, power dispatching, and power averaging, which enables grid- or user-friendly operation. A supervisory control regulates power generation of the individual components so as to enable the hybrid system to operate in the proposed modes of operation. The concept and principle of the hybrid system and its control were described. A simple technique using a low-pass filter was introduced for power averaging. A modified hysteresis-control strategy was applied in the battery converter. Modeling and simulations were based on an electromagnetic-transient-analysis program. A 30-kW hybrid inverter and its control system were developed. The simulation and experimental results were presented to evaluate the dynamic performance of the hybrid system under the proposed modes of operation.",
"title": ""
},
{
"docid": "39cb45c62b83a40f8ea42cb872a7aa59",
"text": "Levy flights are employed in a lattice model of contaminant migration by bioturbation, the reworking of sediment by benthic organisms. The model couples burrowing, foraging, and conveyor-belt feeding with molecular diffusion. The model correctly predicts a square-root dependence on bioturbation rates over a wide range of biomass densities. The model is used to predict the effect of bioturbation on the redistribution of contaminants in laboratory microcosms containing pyrene-inoculated sediments and the tubificid oligochaete Limnodrilus hoffmeisteri. The model predicts the dynamic flux from the sediment and in-bed concentration profiles that are consistent with observations. The sensitivity of flux and concentration profiles to the specific mechanisms of bioturbation are explored with the model. The flux of pyrene to the overlying water was largely controlled by the simulated foraging activities.",
"title": ""
},
{
"docid": "1b5b9b450d0bb9593b70e346c04fc794",
"text": "Due to the real time nature of computer games, the main concerns of game developers have been related to efficiency of algorithm execution and visual presentation, instead of code reusability and maintenance. In order to support these concerns, game developers usually have been implementing all the required functionality to build a game, from scratch, in new projects. However, the complexity in game projects has been increasing over time, rendering such practices infeasible. Then, it is necessary to seek new approaches for game development. This work describes a game development tool (developed for a Master's Thesis), which applies an approach that can address this issue: a framework. The Guff framework is an easy to use tool that features automatic resource management on behalf of the developer, a state machine approach to specify and manage game levels, and extensive employment of free and open source libraries to avoid implementing already available functionalities.",
"title": ""
},
{
"docid": "3c5e3f2fe99cb8f5b26a880abfe388f8",
"text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.",
"title": ""
},
{
"docid": "806a83d17d242a7fd5272862158db344",
"text": "Solar power has become an attractive alternative of electricity energy. Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.",
"title": ""
},
{
"docid": "d3b01d3ce120ac7ceda6b61a04210f39",
"text": "The present experiment investigated two facets of object permanence in young infants: the ability to represent the existence and the location of a hidden stationary object, and the ability to ‘represent the existence and the trajectory of a hidden moving object. Sixand B-month-old infants sat in front of a screen; to the left of the screen was an inclined ramp. The infants watched the following event: the screen was raised and lowered, and a toy car rolled down the ramp, passed behind the screen, and exited the apparatus to the right. After the infants habituated to this event, they saw two test events. These were identical to the habituation event, except that a box was placed behind the screen. In one event (possible event), the box stood in back of the car’s tracks; in the other (impossible event), it stood on top of the tracks, blocking the car’s path. Infants looked longer at the impossible than at the possible event, indicating that they were surprised to see the car reappear from behind the screen when the box stood in its path. A control experiment in which the box was placed in front (possible event) or on top (impossible event) of the car’s tracks yielded similar results. Together, the results of these experiments suggest that infants understood that (I) the box continued to exist, in its same location, after it was occluded by the screen; (2) the car continued to exist, and pursued its trajectory, after it disappeared behind the screen; and (3) the car could not roll through the space occupied by the box. These results have implications for theory and research on the development of infants’ knowledge about objects and infants’ reasoning abilities. *The research reported in this manuscript was supported by a grant from the University Research Institute of the University of Texas at Austin. I thank Judy Deloache and Gwen Gustafson, for their careful reading of the paper; Marty Banks, Kathy Cain, Carol Dweck, Marcia Graber, and Liz Spelke, for helpful comments on different versions of the paper; and Stanley Wasserman and Dawn Iaccobucci for their help with the statistical analyses. I also thank the undergraduates who served as observers and experimenters, and the parents who kindly allowed their infants to participate in the studies. Reprint requests should be sent to Renee Baillargeon, Psychology Department, University of Illinois at Urbana/Champaign 603 E Daniel Street, Champaign, IL. 61820, U.S.A. OOlO-0277/86/$6.80",
"title": ""
},
{
"docid": "776ddd7f1330dba24ed49d32bf4969c5",
"text": "BACKGROUND\nInternet sources are becoming increasingly important in seeking health information, such that they may have a significant effect on health care decisions and outcomes. Hence, given the wide range of different sources of Web-based health information (WHI) from different organizations and individuals, it is important to understand how information seekers evaluate and select the sources that they use, and more specifically, how they assess their credibility and trustworthiness.\n\n\nOBJECTIVE\nThe aim of this study was to review empirical studies on trust and credibility in the use of WHI. The article seeks to present a profile of the research conducted on trust and credibility in WHI seeking, to identify the factors that impact judgments of trustworthiness and credibility, and to explore the role of demographic factors affecting trust formation. On this basis, it aimed to identify the gaps in current knowledge and to propose an agenda for future research.\n\n\nMETHODS\nA systematic literature review was conducted. Searches were conducted using a variety of combinations of the terms WHI, trust, credibility, and their variants in four multi-disciplinary and four health-oriented databases. Articles selected were published in English from 2000 onwards; this process generated 3827 unique records. After the application of the exclusion criteria, 73 were analyzed fully.\n\n\nRESULTS\nInterest in this topic has persisted over the last 15 years, with articles being published in medicine, social science, and computer science and originating mostly from the United States and the United Kingdom. Documents in the final dataset fell into 3 categories: (1) those using trust or credibility as a dependent variable, (2) those using trust or credibility as an independent variable, and (3) studies of the demographic factors that influence the role of trust or credibility in WHI seeking. There is a consensus that website design, clear layout, interactive features, and the authority of the owner have a positive effect on trust or credibility, whereas advertising has a negative effect. With regard to content features, authority of the author, ease of use, and content have a positive effect on trust or credibility formation. Demographic factors influencing trust formation are age, gender, and perceived health status.\n\n\nCONCLUSIONS\nThere is considerable scope for further research. This includes increased clarity of the interaction between the variables associated with health information seeking, increased consistency on the measurement of trust and credibility, a greater focus on specific WHI sources, and enhanced understanding of the impact of demographic variables on trust and credibility judgments.",
"title": ""
},
{
"docid": "164fca8833981d037f861aada01d5f7f",
"text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.",
"title": ""
},
{
"docid": "16f5b9d30f579fd494f7d239b2ebee3a",
"text": "Previous studies have identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. Here we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Second, we test two extrinsic factors: image context and observer behavior. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can automatically predict how changes in context change the memorability of natural images. In addition to context, we study a second extrinsic factor: where an observer looks while memorizing an image. It turns out that eye movements provide additional information that can predict whether or not an image will be remembered, on a trial-by-trial basis. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete and fine-grained model of image memorability than previously available.",
"title": ""
},
{
"docid": "fe1882df52ed6555a087f7683efe80d1",
"text": "Enforcing security on various implementations of OAuth in Android apps should consider a wide range of issues comprehensively. OAuth implementations in Android apps differ from the recommended specification due to the provider and platform factors, and the varied implementations often become vulnerable. Current vulnerability assessments on these OAuth implementations are ad hoc and lack a systematic manner. As a result, insecure OAuth implementations are still widely used and the situation is far from optimistic in many mobile app ecosystems.\n To address this problem, we propose a systematic vulnerability assessment framework for OAuth implementations on Android platform. Different from traditional OAuth security analyses that are experiential with a restrictive three-party model, our proposed framework utilizes an systematic security assessing methodology that adopts a five-party, three-stage model to detect typical vulnerabilities of popular OAuth implementations in Android apps. Based on this framework, a comprehensive investigation on vulnerable OAuth implementations is conducted at the level of an entire mobile app ecosystem. The investigation studies the Chinese mainland mobile app markets (e.g., Baidu App Store, Tencent, Anzhi) that covers 15 mainstream OAuth service providers. Top 100 relevant relying party apps (RP apps) are thoroughly assessed to detect vulnerable OAuth implementations, and we further perform an empirical study of over 4,000 apps to validate how frequently developers misuse the OAuth protocol. The results demonstrate that 86.2% of the apps incorporating OAuth services are vulnerable, and this ratio of Chinese mainland Android app market is much higher than that (58.7%) of Google Play.",
"title": ""
},
{
"docid": "a526cd280b4d15d3f2a3acbed60afae3",
"text": "Vehicular communications, though a reality, must continue to evolve to support higher throughput and, above all, ultralow latency to accommodate new use cases, such as the fully autonomous vehicle. Cybersecurity must be assured since the risk of losing control of vehicles if a country were to come under attack is a matter of national security. This article presents the technological enablers that ensure security requirements are met. Under the umbrella of a dedicated network slice, this article proposes the use of content-centric networking (CCN), instead of conventional transmission control protocol/Internet protocol (TCP/IP) routing and permissioned blockchains that allow for the dynamic control of the source reliability, and the integrity and validity of the information exchanged.",
"title": ""
},
{
"docid": "862c1e0d575920cdba1f221a05c96e6f",
"text": "This project focuses on the development of a line follower algorithm for a Two Wheels Balancing Robot. In this project, ATMEGA32 is chosen as the brain board controller to react towards the data received from Balance Processor Chip on the balance board to monitor the changes of the environment through two infra-red distance sensor to solve the inclination angle problem. Hence, the system will immediately restore to the set point (balance position) through the implementation of internal PID algorithms at the balance board. Application of infra-red light sensors with the PID control is vital, in order to develop a smooth line follower robot. As a result of combination between line follower program and internal self balancing algorithms, we are able to develop a dynamically stabilized balancing robot with line follower function. Keywords—infra-red sensor, PID algorithms, line follower Balancing robot",
"title": ""
},
{
"docid": "2c10278d2d5ce98d139e401d12ee9d91",
"text": "The use of Cloud Computing Services appears to offer significant cost advantages. Particularly start-up companies benefit from these advantages, since frequently they do not operate an internal IT infrastructure. But are costs associated with Cloud Computing Services really that low? We found that particular cost types and factors are frequently underestimated by practitioners. In this paper we present a Total Cost of Ownership (TCO) approach for Cloud Computing Services. We applied a multi-method approach (systematic literature review, analysis of real Cloud Computing Services, expert interview, case study) for the development and evaluation of the formal mathematical model. We found that our model fits the practical requirements and supports decision-making in Cloud Computing.",
"title": ""
}
] |
scidocsrr
|
ec66a4af130f2cc1b98f0aa05a051cc6
|
MINE: Mutual Information Neural Estimation
|
[
{
"docid": "34e6033c7eb0bc0e16847c8c9b9d113c",
"text": "Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows to rephrase the maximum-likelihoodproblem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.",
"title": ""
}
] |
[
{
"docid": "e134a35340fbf5f825d0d64108a171c3",
"text": "The present study investigated relations of anxiety sensitivity and other theoretically relevant personality factors to Copper's [Psychological Assessment 6 (1994) 117.] four categories of substance use motivations as applied to teens' use of alcohol, cigarettes, and marijuana. A sample of 508 adolescents (238 females, 270 males; mean age = 15.1 years) completed the Trait subscale of the State-Trait Anxiety Inventory for Children, the Childhood Anxiety Sensitivity Index (CASI), and the Intensity and Novelty subscales of the Arnett Inventory of Sensation Seeking. Users of each substance also completed the Drinking Motives Questionnaire-Revised (DMQ-R) and/or author-compiled measures for assessing motives for cigarette smoking and marijuana use, respectively. Multiple regression analyses revealed that, in the case of each drug, the block of personality variables predicted \"risky\" substance use motives (i.e., coping, enhancement, and/or conformity motives) over-and-above demographics. High intensity seeking and low anxiety sensitivity predicted enhancement motives for alcohol use, high anxiety sensitivity predicted conformity motives for alcohol and marijuana use, and high trait anxiety predicted coping motives for alcohol and cigarette use. Moreover, anxiety sensitivity moderated the relation between trait anxiety and coping motives for alcohol and cigarette use: the trait anxiety-coping motives relation was stronger for high, than for low, anxiety sensitive individuals. Implications of the findings for improving substance abuse prevention efforts for youth will be discussed.",
"title": ""
},
{
"docid": "a9221eec3d27b66420889ec657f9c0ff",
"text": "Due to the amount of anonymity afforded to users of the Tor infrastructure, Tor has become a useful tool for malicious users. With Tor, the users are able to compromise the non-repudiation principle of computer security. Also, the potentially hackers may launch attacks such as DDoS or identity theft behind Tor. For this reason, there are needed new systems and models to detect the intrusion in Tor networks. In this paper, we present the application of Deep Recurrent Neural Networks (DRNNs) for prediction of user behavior in Tor networks. We constructed a Tor server and a Deep Web browser (Tor client) in our laboratory. Then, the client sends the data browsing to the Tor server using the Tor network. We used Wireshark Network Analyzer to get the data and thenused the DRNNs to make the prediction. The simulation results show that our simulation system has a good prediction of user behavior in Tor networks.",
"title": ""
},
{
"docid": "d5ed3d05cedbee5ef3c06b152d0a19ae",
"text": "The ability to ask questions is a powerful tool to gather information in order to learn about the world and resolve ambiguities. In this paper, we explore a novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement and new extension to the rich research studies on image captioning and question answering. We introduce the first large-scale dataset with over 10,000 carefully annotated images-question tuples to facilitate benchmarking. In particular, each tuple consists of a pair of images and 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples) on average. In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner without discriminative images-question tuples but just existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.",
"title": ""
},
{
"docid": "ddd7b66fcb5bdd90f973efba65772c10",
"text": "Many applications of geometry processing and computer vision rely on geometric properties of curves, particularly their curvature. Several methods have already been proposed to estimate the curvature of a planar curve, most of them for curves in digital spaces. This work proposes a new scheme for estimating curvature and torsion of planar and spatial curves, based on weighted least–squares fitting and local arc–length approximation. The method is simple enough to admit a convergence analysis that take into acount the effect of noise in the samples. The implementation of the method is compared to other curvature estimation methods showing a good performance. Applications to prediction in geometry compression are presented both as a practical application and as a validation of this new scheme.",
"title": ""
},
{
"docid": "5fd10b2277918255133f2e37a55e1103",
"text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.",
"title": ""
},
{
"docid": "17faf590307caf41095530fcec1069c7",
"text": "Fine-grained visual recognition typically depends on modeling subtle difference from object parts. However, these parts often exhibit dramatic visual variations such as occlusions, viewpoints, and spatial transformations, making it hard to detect. In this paper, we present a novel attention-based model to automatically, selectively and accurately focus on critical object regions with higher importance against appearance variations. Given an image, two different Convolutional Neural Networks (CNNs) are constructed, where the outputs of two CNNs are correlated through bilinear pooling to simultaneously focus on discriminative regions and extract relevant features. To capture spatial distributions among the local regions with visual attention, soft attention based spatial LongShort Term Memory units (LSTMs) are incorporated to realize spatially recurrent yet visually selective over local input patterns. All the above intuitions equip our network with the following novel model: two-stream CNN layers, bilinear pooling layer, spatial recurrent layer with location attention are jointly trained via an end-to-end fashion to serve as the part detector and feature extractor, whereby relevant features are localized and extracted attentively. We show the significance of our network against two well-known visual recognition tasks: fine-grained image classification and person re-identification.",
"title": ""
},
{
"docid": "50d69148b4f259b0040d0f71642bf118",
"text": "Given a real world graph, how should we lay-out its edges? How can we compress it? These questions are closely related, and the typical approach so far is to find clique-like communities, like the cavemen graph', and compress them. We show that the block-diagonal mental image of the cavemen graph' is the wrong paradigm, in full agreement with earlier results that real world graphs have no good cuts. Instead, we propose to envision graphs as a collection of hubs connecting spokes, with super-hubs connecting the hubs, and so on, recursively. Based on the idea, we propose the SLASHBURN method to recursively split a graph into hubs and spokes connected only by the hubs. We also propose techniques to select the hubs and give an ordering to the spokes, in addition to the basic SLASHBURN. We give theoretical analysis of the proposed hub selection methods. Our view point has several advantages: (a) it avoids the no good cuts' problem, (b) it gives better compression, and (c) it leads to faster execution times for matrix-vector operations, which are the back-bone of most graph processing tools. Through experiments, we show that SLASHBURN consistently outperforms other methods for all data sets, resulting in better compression and faster running time. Moreover, we show that SLASHBURN with the appropriate spokes ordering can further improve compression while hardly sacrificing the running time.",
"title": ""
},
{
"docid": "0459d9b635da3a6defe768427ef20834",
"text": "Matrix factorization (MF) is used by many popular algorithms such as collaborative filtering. GPU with massive cores and high memory bandwidth sheds light on accelerating MF much further when appropriately exploiting its architectural characteristics.\n This paper presents cuMF, a CUDA-based matrix factorization library that optimizes alternate least square (ALS) method to solve very large-scale MF. CuMF uses a set of techniques to maximize the performance on single and multiple GPUs. These techniques include smart access of sparse data leveraging GPU memory hierarchy, using data parallelism in conjunction with model parallelism, minimizing the communication overhead among GPUs, and a novel topology-aware parallel reduction scheme.\n With only a single machine with four Nvidia GPU cards, cuMF can be 6-10 times as fast, and 33-100 times as cost-efficient, compared with the state-of-art distributed CPU solutions. Moreover, cuMF can solve the largest matrix factorization problem ever reported in current literature, with impressively good performance.",
"title": ""
},
{
"docid": "f56d5487c5f59d9b951841b993cbec07",
"text": "We present Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air 'pigtail' to copy text to the clipboard. Through an observational study, we devised a basic taxonomy of Air+Touch interactions, based on whether the in-air component occurs before, between or after touches. To illustrate the potential of our approach, we built four applications that showcase seven exemplar Air+Touch interactions we created.",
"title": ""
},
{
"docid": "fd552ab0c10bcbd35a18dbb1b3920d37",
"text": "We propose the hypothesis that word etymology is useful for NLP applications as a bridge between languages. We support this hypothesis with experiments in crosslanguage (English-Italian) document categorization. In a straightforward bag-ofwords experimental set-up we add etymological ancestors of the words in the documents, and investigate the performance of a model built on English data, on Italian test data (and viceversa). The results show not only statistically significant, but a large improvement – a jump of almost 40 points in F1-score – over the raw (vanilla bag-ofwords) representation.",
"title": ""
},
{
"docid": "2d0e362a903e18f39bbbae320d29b396",
"text": "We give algorithms for finding the k shortest paths (not required to be simple) connecting a pair of vertices in a digraph. Our algorithms output an implicit representation of these paths in a digraph with n vertices and m edges, in time O(m + nlogn + k). We can also find the k shortest paths from a given source s to each vertex in the graph, in total time O(m + n logn + kn). We describe applications to dynamic programming problems including the knapsack problem, sequence alignment, and maximum inscribed",
"title": ""
},
{
"docid": "6b530ee6c18f0c71b9b057108b2b2174",
"text": "We present a multi-modulus frequency divider based upon novel dual-modulus 4/5 and 2/3 true single-phase clocked (TSPC) prescalers. High-speed and low-power operation was achieved by merging the combinatorial counter logic with the flip-flop stages and removing circuit nodes at the expense of allowing a small short-circuit current during a short fraction of the operation cycle, thus minimizing the amount of nodes in the circuit. The divider is designed for operation in wireline or fibre-optic serial link transceivers with programmable divider ratios of 64, 80, 96, 100, 112, 120 and 140. The divider is implemented as part of a phase-locked loop around a quadrature voltage controlled oscillator in a 65nm CMOS technology. The maximum operating frequency is measured to be 17GHz with 2mW power consumption from a 1.0V supply voltage, and occupies 25×50μm2.",
"title": ""
},
{
"docid": "4768001167cefad7b277e3b77de648bb",
"text": "MicroRNAs (miRNAs) regulate gene expression at the posttranscriptional level and are therefore important cellular components. As is true for protein-coding genes, the transcription of miRNAs is regulated by transcription factors (TFs), an important class of gene regulators that act at the transcriptional level. The correct regulation of miRNAs by TFs is critical, and increasing evidence indicates that aberrant regulation of miRNAs by TFs can cause phenotypic variations and diseases. Therefore, a TF-miRNA regulation database would be helpful for understanding the mechanisms by which TFs regulate miRNAs and understanding their contribution to diseases. In this study, we manually surveyed approximately 5000 reports in the literature and identified 243 TF-miRNA regulatory relationships, which were supported experimentally from 86 publications. We used these data to build a TF-miRNA regulatory database (TransmiR, http://cmbi.bjmu.edu.cn/transmir), which contains 82 TFs and 100 miRNAs with 243 regulatory pairs between TFs and miRNAs. In addition, we included references to the published literature (PubMed ID) information about the organism in which the relationship was found, whether the TFs and miRNAs are involved with tumors, miRNA function annotation and miRNA-associated disease annotation. TransmiR provides a user-friendly interface by which interested parties can easily retrieve TF-miRNA regulatory pairs by searching for either a miRNA or a TF.",
"title": ""
},
{
"docid": "5260fcf528d2da192504c7ebdcfebff7",
"text": "Divided nevus of the penis is exceedingly rare. Desruelles et al. reported the first divided nevus on the penis in 1998, and, since then, only 17 cases have been reported in the English language literature. This article presents the successful excision and histopathologic evaluation of the nevi. The glans was reconstructed by a full-thickness skin graft using remnant foreskin. Six months after the operation, the patient showed no deformity of the glans and no loss of sensation. The lesion on the glans can be successfully reconstructed using the remnant foreskin with satisfactory aesthetic and functional outcome. This method is desirable with minimal donor-site morbidity and inconspicuous donor-site scars.",
"title": ""
},
{
"docid": "74770d8f7e0ac066badb9760a6a2b925",
"text": "Memristor-based synaptic network has been widely investigated and applied to neuromorphic computing systems for the fast computation and low design cost. As memristors continue to mature and achieve higher density, bit failures within crossbar arrays can become a critical issue. These can degrade the computation accuracy significantly. In this work, we propose a defect rescuing design to restore the computation accuracy. In our proposed design, significant weights in a specified network are first identified and retraining and remapping algorithms are described. For a two layer neural network with 92.64% classification accuracy on MNIST digit recognition, our evaluation based on real device testing shows that our design can recover almost its full performance when 20% random defects are present.",
"title": ""
},
{
"docid": "a637d37cb1c4a937b64494903b33193d",
"text": "The multienzyme complexes, pyruvate dehydrogenase and alpha-ketoglutarate dehydrogenase, involved in the central metabolism of Escherichia coli consist of multiple copies of three different enzymes, E1, E2 and E3, that cooperate to channel substrate intermediates between their active sites. The E2 components form the core of the complex, while a mixture of E1 and E3 components binds to the core. We present a random steady-state model to describe catalysis by such multienzyme complexes. At a fast time scale, the model describes the enzyme catalytic mechanisms of substrate channeling at a steady state, by polynomially approximating the analytic solution of a biochemical master equation. At a slower time scale, the structural organization of the different enzymes in the complex and their random binding/unbinding to the core is modeled using methods from equilibrium statistical mechanics. Biologically, the model describes the optimization of catalytic activity by substrate sharing over the entire enzyme complex. The resulting enzymatic models illustrate the random steady state (RSS) for modeling multienzyme complexes in metabolic pathways.",
"title": ""
},
{
"docid": "27c7afd468d969509eec2b2a3260a679",
"text": "The impact of predictive genetic testing on cancer care can be measured by the increased demand for and utilization of genetic services as well as in the progress made in reducing cancer risks in known mutation carriers. Nonetheless, differential access to and utilization of genetic counseling and cancer predisposition testing among underserved racial and ethnic minorities compared with the white population has led to growing health care disparities in clinical cancer genetics that are only beginning to be addressed. Furthermore, deficiencies in the utility of genetic testing in underserved populations as a result of limited testing experience and in the effectiveness of risk-reducing interventions compound access and knowledge-base disparities. The recent literature on racial/ethnic health care disparities is briefly reviewed, and is followed by a discussion of the current limitations of risk assessment and genetic testing outside of white populations. The importance of expanded testing in underserved populations is emphasized.",
"title": ""
},
{
"docid": "33e45b66cca92f15270500c32a1c0b94",
"text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.",
"title": ""
},
{
"docid": "4b5d5d4da56ad916afdad73cc0180cb5",
"text": "This work proposes a substrate integrated waveguide (SIW) power divider employing the Wilkinson configuration for improving the isolation performance of conventional T-junction SIW power dividers. Measurement results at 15GHz show that the isolation (S23, S32) between output ports is about 17 dB and the output return losses (S22, S33) are about 14.5 dB, respectively. The Wilkinson-type performance has been greatly improved from those (7.0 dB ∼ 8.0 dB) of conventional T-junction SIW power dividers. The measured input return loss (23 dB) and average insertion loss (3.9 dB) are also improved from those of conventional ones. The proposed Wilkinson SIW divider will play an important role in high performance SIW circuits involving power divisions.",
"title": ""
}
] |
scidocsrr
|
9a066557bcd5b611579ec7e39096f3c0
|
Mobile Augmented Reality: the potential for education
|
[
{
"docid": "9f01314a03290cf3d481f731648eb138",
"text": "Recent advances in hardware and software for mobile computing have enabled a new breed of mobile AR systems and applications. A new breed of computing called “augmented ubiquitous computing” has resulted from the convergence of wearable computing, wireless networking and mobile AR interfaces. In this paper we provide a survey of different mobile and wireless technologies and how they have impact AR. Our goal is to place them into different categories so that it becomes easier to understand the state of art and to help identify new directions of research.",
"title": ""
},
{
"docid": "1f543e9eeb2dbebb305220166395272b",
"text": "the study of the structure of the human body, is fundamental to medical education. In recent years, the hours devoted to anatomy are declining from the medical curriculum. This decline includes the reduction of course hours and an emphasis on early clinical experience. To adapt to those changes in anatomy education, various complementary methods with technology of three-dimensional visualization have been tried, and the explosion of image technology during the last few decades and this has brought anatomical education into a new world. In this study, we aim to use augmented reality (AR) technology to create an interactive learning system, which help medical students to understand and memorize the 3D anatomy structure easily with tangible augmented reality support. We speculate that by working directly with 3D skull model with visual support and tangible manipulate, this AR system can help young medical students to learn the complex anatomy structure better and faster than only with traditional methods.",
"title": ""
}
] |
[
{
"docid": "ce493a4a89854aba167863cd5f5d8660",
"text": "A given cell makes exchanges with its neighbors through a variety of means ranging from diffusible factors to vesicles. Cells use also tunneling nanotubes (TNTs), filamentous-actin-containing membranous structures that bridge and connect cells. First described in immune cells, TNTs facilitate HIV-1 transfer and are found in various cell types, including neurons. We show that the microtubule-associated protein Tau, a key player in Alzheimer's disease, is a bona fide constituent of TNTs. This is important because Tau appears beside filamentous actin and myosin 10 as a specific marker of these fine protrusions of membranes and cytosol that are difficult to visualize. Furthermore, we observed that exogenous Tau species increase the number of TNTs established between primary neurons, thereby facilitating the intercellular transfer of Tau fibrils. In conclusion, Tau may contribute to the formation and function of the highly dynamic TNTs that may be involved in the prion-like propagation of Tau assemblies.",
"title": ""
},
{
"docid": "6c175d7a90ed74ab3b115977c82b0ffa",
"text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.",
"title": ""
},
{
"docid": "9c0cd7c0641a48dcede829a6ac3ed622",
"text": "Association rules are considered to be the best studied models for data mining. In this article, we propose their use in order to extract knowledge so that normal behavior patterns may be obtained in unlawful transactions from transactional credit card databases in order to detect and prevent fraud. The proposed methodology has been applied on data about credit card fraud in some of the most important retail companies in Chile. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6439a34daa3acd043bc2f7992d6f19ac",
"text": "We present a methodology for the direction of arrival (DOA) estimation using the induced voltages that are measured at the loads connected to electrically small tuned dipole antenna arrays illuminated by the signal of interest (SOI). The matrix pencil method is applied directly to the induced voltages to estimate the DOA of the various signals. Using electrically small tuned antennas can be advantageous as they can be placed in close proximity of each other saving the real estate and, thus, making it possible to deploy phased arrays on small footprints. When dealing with closely spaced tuned electrically small antennas, it is necessary to use a transformation matrix to compensate for the strong mutual coupling that may exist between the antenna elements. The transformation matrix converts the voltages that are induced at the loads corresponding to the feed point of the array operating in the presence of mutual coupling and other near field scatterers to an equivalent set of voltages that will be induced by the same incident wave in an uniform linear virtual array (ULVA) consisting of omnidirectional isotropic point radiators equally spaced and operating in free space. For any given incident field, the open circuit voltage developed across the feed-point of the small dipole will always be less than the open circuit voltage developed across the feed-point of the half-wavelength dipole. The difference is in the voltage developed across the loads connected to the dipole's feed-point. With the small dipole antenna, the voltage developed across the load impedance will be orders of magnitude greater than the voltage developed across the load connected to the half-wavelength dipole, even though the power captured by any conjugately matched antenna is approximately the same irrespective of their lengths. Three different scenarios are presented to illustrate the methodology. First, we consider resonant dipole elements spaced half wavelength apart, electrically small tuned antenna elements spaced half wavelength apart and electrically small tuned antenna elements placed in close proximity of each other to reduce the footprint without affecting the performance of the phased array. In addition, we consider the possibility of DOA estimation using a combination of different type of electrically small antennas both uniformly and nonuniformly spaced. Numerical examples are presented to illustrate the principles of this methodology",
"title": ""
},
{
"docid": "efe70da1a3118e26acf10aa480ad778d",
"text": "Background: Facebook (FB) is becoming an increasingly salient feature in peoples’ lives and has grown into a bastion in our current society with over 1 billion users worldwide –the majority of which are college students. However, recent studies conducted suggest that the use of Facebook may impacts individuals’ well being. Thus, this paper aimed to explore the effects of Facebook usage on adolescents’ emotional states of depression, anxiety, and stress. Method and Material: A cross sectional design was utilized in this investigation. The study population included 76 students enrolled in the Bachelor of Science in Nursing program from a government university in Samar, Philippines. Facebook Intensity Scale (FIS) and the Depression Anxiety and Stress Scale (DASS) were the primary instruments used in this study. Results: Findings indicated correlation coefficients of 0.11 (p=0.336), 0.07 (p=0.536), and 0.10 (p=0.377) between Facebook Intensity Scale (FIS) and Depression, Anxiety, and Stress scales in the DASS. Time spent on FBcorrelated significantly with depression (r=0.233, p=0.041) and anxiety (r=0.259, p=0.023). Similarly, the three emotional states (depression, anxiety, and stress) correlated significantly. Conclusions: Intensity of Facebook use is not directly related to negative emotional states. However, time spent on Facebooking increases depression and anxiety scores. Implications of the findings to the fields of counseling and psychology are discussed.",
"title": ""
},
{
"docid": "17ec5256082713e85c819bb0a0dd3453",
"text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.",
"title": ""
},
{
"docid": "ab57df7702fa8589f7d462c80d9a2598",
"text": "The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.",
"title": ""
},
{
"docid": "7cfd90a3c9091c296e621ff34fc471e6",
"text": "The study aimed to develop machine learning models that have strong prediction power and interpretability for diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from the examination of retinal nerve fiber layer (RNFL) thickness and visual field (VF). We also developed synthesized features from original features. We then selected the best features proper for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases of data as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it by using the validation dataset. Finally, we got the best learning model that produces the highest validation accuracy. We analyzed quality of the models using several measures. The random forest model shows best performance and C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying among glaucoma and healthy eyes. It will be used for predicting glaucoma against unknown examination records. Clinicians may reference the prediction results and be able to make better decisions. We may combine multiple learning models to increase prediction accuracy. The C5.0 model includes decision rules for prediction. It can be used to explain the reasons for specific predictions.",
"title": ""
},
{
"docid": "828c54f29339e86107f1930ae2a5e77f",
"text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7e43c21444af5fdb4e8ff6890742a44b",
"text": "Cellulases are the enzymes hydrolyzing cellulosic biomass and are produced by the microorganisms that grown over cellulosic matters. Bacterial cellulases possess more advantages when compared to the cellulases from other sources. Cellulase producing bacteria was isolated from Cow dung. The organism was identified using 16 SrDNA sequencing and BLAST search. Cellulase was produced and the culture conditions like temperature, pH, and Incubation time and medium components like Carbon sources, nitrogen sources and role of natural substrates were optimized. The enzyme was further purified using ethanol precipitation and chromatography. Cellulase was then characterized using SDS-PAGE analysis and Zymographic Studies. The application of Cellulase in Biostoning was then analyzed.",
"title": ""
},
{
"docid": "1d4201c3e0c86c8fad74003be243afb7",
"text": "BACKGROUND\nTwo primary factors that contribute to obesity are unhealthy eating and sedentary behavior. These behaviors are particularly difficult to change in the long-term because they are often enacted habitually. Cognitive Remediation Therapy has been modified and applied to the treatment of obesity (CRT-O) with preliminary results of a randomized controlled trial demonstrating significant weight loss and improvements in executive function. The objective of this study was to conduct a secondary data analysis of the CRT-O trial to evaluate whether CRT-O reduces unhealthy habits that contribute to obesity via improvements in executive function.\n\n\nMETHOD\nEighty participants with obesity were randomized to CRT-O or control. Measures of executive function (Wisconsin Card Sort Task and Trail Making Task) and unhealthy eating and sedentary behavior habits were administered at baseline, post-intervention and at 3 month follow-up.\n\n\nRESULTS\nParticipants receiving CRT-O demonstrated improvements in both measures of executive function and reductions in both unhealthy habit outcomes compared to control. Mediation analyses revealed that change in one element of executive function performance (Wisconsin Card Sort Task perseverance errors) mediated the effect of CRT-O on changes in both habit outcomes.\n\n\nCONCLUSION\nThese results suggest that the effectiveness of CRT-O may result from the disruption of unhealthy habits made possible by improvements in executive function. In particular, it appears that cognitive flexibility, as measured by the Wisconsin Card Sort task, is a key mechanism in this process. Improving cognitive flexibility may enable individuals to capitalise on interruptions in unhealthy habits by adjusting their behavior in line with their weight loss goals rather than persisting with an unhealthy choice.\n\n\nTRIAL REGISTRATION\nThe RCT was registered with the Australian New Zealand Registry of Clinical Trials (trial id: ACTRN12613000537752 ).",
"title": ""
},
{
"docid": "1933e3f26cae0b1a1cef204acbbb9ebd",
"text": "Should individuals include actively managed mutual funds in their investment portfolios? They should if and only if the result of active management is superior performance due to skill. This paper employs a previously ignored statistical technique to detect whether skill drives the superior performance of some mutual funds. This technique, the generalized binomial distribution, models a sequence of n Bernoulli events in which the result of each event is either success or failure (successive quarters during which funds outperform or do not outperform the market). Results display a statistically significant proportion of mutual funds, though small in number, outperform their peers on a risk–adjusted basis and do so as a result of skill, not luck. This result signifies the rationality of entrusting one’s wealth to successful and skillfully managed mutual funds. Hence, a well–designed portfolio that includes actively managed funds may trump a wholly passive index fund strategy. JEL Classifications: G10, G11, G12",
"title": ""
},
{
"docid": "4f50f9ed932635614d0f4facbaa80992",
"text": "In this paper we propose an overview of the recent academic literature devoted to the applications of Hawkes processes in finance. Hawkes processes constitute a particular class of multivariate point processes that has become very popular in empirical high frequency finance this last decade. After a reminder of the main definitions and properties that characterize Hawkes processes, we review their main empirical applications to address many different problems in high frequency finance. Because of their great flexibility and versatility, we show that they have been successfully involved in issues as diverse as estimating the volatility at the level of transaction data, estimating the market stability, accounting for systemic risk contagion, devising optimal execution strategies or capturing the dynamics of the full order book.",
"title": ""
},
{
"docid": "4e0ff4875a4dff6863734c964db54540",
"text": "We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books (“Anonymous audio book service”), Mobile Apps, Video and Music. It produces recommendations based on user consumption history: purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and autoencoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories. Finally, we show online A/B test results, discuss key improvements to the neural network model, and describe our production pipeline.",
"title": ""
},
{
"docid": "824fff3f4aea6a4b4d87c7b1ec5e3e75",
"text": "This paper presents the behaviour of a hybrid system based on renewable sources (wind and solar) with their stochastic behaviour and a pre-programmed timed load profile. The goal is the analysis of a small hybrid system in the context of new services that can be offered to the grids as a power generator. Both the sources have continuous maximum power extraction where the Maximum Power Point Tracking (MPPT) of the wind generator is based on its DC power output. The structure of the system is presented and the local and global control is developed. Simulation and conclusions about the behaviour of the system are presented.",
"title": ""
},
{
"docid": "54d1e75ca60b89af7ac77a2175aafa97",
"text": "The purpose of this study was to compare the biomechanics of the traditional squat with 2 popular exercise variations commonly referred to as the powerlifting squat and box squat. Twelve male powerlifters performed the exercises with 30, 50, and 70% of their measured 1 repetition maximum (1RM), with instruction to lift the loads as fast as possible. Inverse dynamics and spatial tracking of the external resistance were used to quantify biomechanical variables. A range of significant kinematic and kinetic differences (p < 0.05) emerged between the exercises. The traditional squat was performed with a narrow stance, whereas the powerlifting squat and box squat were performed with similar wide stances (48.3 ± 3.8, 89.6 ± 4.9, 92.1 ± 5.1 cm, respectively). During the eccentric phase of the traditional squat, the knee traveled past the toes resulting in anterior displacement of the system center of mass (COM). In contrast, during the powerlifting squat and box squat, a more vertical shin position was maintained, resulting in posterior displacements of the system COM. These differences in linear displacements had a significant effect (p < 0.05) on a number of peak joint moments, with the greatest effects measured at the spine and ankle. For both joints, the largest peak moment was produced during the traditional squat, followed by the powerlifting squat, then box squat. Significant differences (p < 0.05) were also noted at the hip joint where the largest moment in all 3 planes were produced during the powerlifting squat. Coaches and athletes should be aware of the biomechanical differences between the squatting variations and select according to the kinematic and kinetic profile that best match the training goals.",
"title": ""
},
{
"docid": "82f1e278631c4ee6cd253842a7b9697a",
"text": "The introduction of smart mobile devices has radically redesigned user interaction, as these devices are equipped with numerous sensors, making applications context-aware. To further improve user experience, most mobile operating systems and service providers are gradually shipping smart devices with voice controlled intelligent personal assistants, reaching a new level of human and technology convergence. While these systems facilitate user interaction, it has been recently shown that there is a potential risk regarding devices, which have such functionality. Our independent research indicates that this threat is not merely potential, but very real and more dangerous than initially perceived, as it is augmented by the inherent mechanisms of the underlying operating systems, the increasing capabilities of these assistants, and the proximity with other devices in the Internet of Things (IoT) era. In this paper, we discuss and demonstrate how these attacks can be launched, analysing their impact in real world scenarios.",
"title": ""
},
{
"docid": "83d15f5a04066a6476f0f23bfad5553c",
"text": "Generative Adversarial Networks (GAN) are able to learn excellent representations for unlabelled data which can be applied to image generation and scene classification. Representations learned by GANs have not yet been applied to retrieval. In this paper, we show that the representations learned by GANs can indeed be used for retrieval. We consider heritage documents that contain unlabelled Merchant Marks, sketch-like symbols that are similar to hieroglyphs. We introduce a novel GAN architecture with design features that make it suitable for sketch retrieval. The performance of this sketch-GAN is compared to a modified version of the original GAN architecture with respect to simple invariance properties. Experiments suggest that sketch-GANs learn representations that are suitable for retrieval and which also have increased stability to rotation, scale and translation compared to the standard GAN architecture.",
"title": ""
},
{
"docid": "dd84b653de8b3b464c904a988a622a39",
"text": "We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24% on a held-out set of relations. The code and the dataset to replicate the experiments are made available at https://github.com/ukplab.",
"title": ""
},
{
"docid": "c85e22b314f14a453524dfe390d8f9dc",
"text": "Wide spread monitoring cameras on construction sites provide large amount of information for construction management. The emerging of computer vision and machine learning technologies enables automatic recognition of construction activities from videos. As the executors of construction, the activities of construction workers have strong impact on productivity and progress. Compared to machine work, manual work is more subjective and may differ largely in operation flow and productivity from one worker to another. Hence only a handful of work study on vision based activity recognition of construction workers. Lacking of publicly available datasets is one of the main reasons that currently hinder advancement. The paper studies manual work of construction workers comprehensively, selects 11 common types of activities and establishes a new real world video dataset with 1176 instances. For activity recognition, a cutting-edge video description method, dense trajectories, has been applied. Support vector machines are integrated with a bag-of-features pipeline for activity learning and classification. Performance on multiple types of descriptors (Histograms of Oriented Gradients HOG, Histograms of Optical Flow HOF, Motion Boundary Histogram MBH) and their combination has been evaluated. Experimental results show that the proposed system has achieved a state-of-art performance on the new dataset.",
"title": ""
}
] |
scidocsrr
|
8df92295911fab2679a6b43244e70560
|
A Framework for Guiding and Evaluating Literature Reviews
|
[
{
"docid": "2200fc48420da69c005f63428b6947ac",
"text": "Innovation diffusion theory provides a useful perspective on one of the most persistently challenging topics in the IT field, namely, how to improve technology assessment, adoption and implementation. For this reason, diffusion is growing in popularity as a reference theory for empirical studies of information technology adoption and diffusion, although no comprehensive review of this body of work has been published to date. This paper presents the results of a critical review of eighteen empirical studies published during the period 1981-1991. Conclusive results were most likely when the adoption context closely matched the contexts in which classical diffusion theory was developed (for example, individual adoption of personal-use technologies) or when researchers extended diffusion theory to account for new factors specific to the IT adoption context under study. Based on classical diffusion theory and other recent conceptual work, a framework is developed to guide future research in IT diffusion. The framework maps two classes of technology (ones that conform closely to classical diffusion assumptions versus ones that do no0 against locus of adoption (individual versus organizational), resulting in four IT adoption contexts. For each adoption context, variables impacting adoption and diffusion are identified. Additionally, directions for future research are discussed.",
"title": ""
},
{
"docid": "7eec9c40d8137670a88992d40ef52101",
"text": "Nowadays, most nurses, pre- and post-qualification, will be required to undertake a literature review at some point, either as part of a course of study, as a key step in the research process, or as part of clinical practice development or policy. For student nurses and novice researchers it is often seen as a difficult undertaking. It demands a complex range of skills, such as learning how to define topics for exploration, acquiring skills of literature searching and retrieval, developing the ability to analyse and synthesize data as well as becoming adept at writing and reporting, often within a limited time scale. The purpose of this article is to present a step-by-step guide to facilitate understanding by presenting the critical elements of the literature review process. While reference is made to different types of literature reviews, the focus is on the traditional or narrative review that is undertaken, usually either as an academic assignment or part of the research process.",
"title": ""
}
] |
[
{
"docid": "faad73e69bdead37f18f6fdc6181b129",
"text": "This paper presents a design method to determine the output ratio of a differential mechanism, which is installed in an in-pipe robot with three underactuated parallelogram crawler modules. The crawler module can automatically shift its shape to a parallelogram when encountering with obstacles. To clarify the requirements for the mechanism, two outputs of each crawler mechanism (torque of the arms and front pulley) are quasi-statically analyzed, and how the environmental and design parameters affect the system are verified.",
"title": ""
},
{
"docid": "d81edfca62048454218d742d20fe1abc",
"text": "Query-based text summarization is aimed at extracting essential information that answers the query from original text. The answer is presented in a minimal, often predefined, number of words. In this paper we introduce a new unsupervised approach for query-based extractive summarization, based on the minimum description length (MDL) principle that employs Krimp compression algorithm (Vreeken et al., 2011). The key idea of our approach is to select frequent word sets related to a given query that compress document sentences better and therefore describe the document better. A summary is extracted by selecting sentences that best cover query-related frequent word sets. The approach is evaluated based on the DUC 2005 and DUC 2006 datasets which are specifically designed for query-based summarization (DUC, 2005 2006). It competes with the best results.",
"title": ""
},
{
"docid": "c745458a3113a28cb0c7935e83b92ea1",
"text": "Reinforcement Learning (RL) has been effectively used to solve complex problems given careful design of the problem and algorithm parameters. However standard RL approaches do not scale particularly well with the size of the problem and often require extensive engineering on the part of the designer to minimize the search space. To alleviate this problem, we present a model-free policy-based approach called Exploration from Demonstration (EfD) that uses human demonstrations to guide search space exploration. We use statistical measures of RL algorithms to provide feedback to the user about the agent’s uncertainty and use this to solicit targeted demonstrations useful from the agent’s perspective. The demonstrations are used to learn an exploration policy that actively guides the agent towards important aspects of the problem. We instantiate our approach in a gridworld and a popular arcade game and validate its performance under different experimental conditions. We show how EfD scales to large problems and provides convergence speed-ups over traditional exploration and interactive learning methods.",
"title": ""
},
{
"docid": "28dfe540e7bf24c66be0a2563fb9a145",
"text": "Taxonomies are often used to look up the concepts they contain in text documents (for instance, to classify a document). The more comprehensive the taxonomy, the higher recall the application has that uses the taxonomy. In this paper, we explore automatic taxonomy augmentation with paraphrases. We compare two state-of-the-art paraphrase models based on Moses, a statistical Machine Translation system, and a sequence-to-sequence neural network, trained on a paraphrase datasets with respect to their abilities to add novel nodes to an existing taxonomy from the risk domain. We conduct component-based and task-based evaluations. Our results show that paraphrasing is a viable method to enrich a taxonomy with more terms, and that Moses consistently outperforms the sequence-to-sequence neural model. To the best of our knowledge, this is the first approach to augment taxonomies with paraphrases.",
"title": ""
},
{
"docid": "e60c295d02b87d4c88e159a3343e0dcb",
"text": "In 2163 personally interviewed female twins from a population-based registry, the pattern of age at onset and comorbidity of the simple phobias (animal and situational)--early onset and low rates of comorbidity--differed significantly from that of agoraphobia--later onset and high rates of comorbidity. Consistent with an inherited \"phobia proneness\" but not a \"social learning\" model of phobias, the familial aggregation of any phobia, agoraphobia, social phobia, and animal phobia appeared to result from genetic and not from familial-environmental factors, with estimates of heritability of liability ranging from 30% to 40%. The best-fitting multivariate genetic model indicated the existence of genetic and individual-specific environmental etiologic factors common to all four phobia subtypes and others specific for each of the individual subtypes. This model suggested that (1) environmental experiences that predisposed to all phobias were most important for agoraphobia and social phobia and relatively unimportant for the simple phobias, (2) environmental experiences that uniquely predisposed to only one phobia subtype had a major impact on simple phobias, had a modest impact on social phobia, and were unimportant for agoraphobia, and (3) genetic factors that predisposed to all phobias were most important for animal phobia and least important for agoraphobia. Simple phobias appear to arise from the joint effect of a modest genetic vulnerability and phobia-specific traumatic events in childhood, while agoraphobia and, to a somewhat lesser extent, social phobia result from the combined effect of a slightly stronger genetic influence and nonspecific environmental experiences.",
"title": ""
},
{
"docid": "c9dca9b27abe9ebabeff7c7e3814dcae",
"text": "The Internet of Things (IoT) can be defined as an environment where internet capabilities are applied to everyday objects that have earlier not been considered as computers to provide a network connectivity that will enable these objects to generate, exchange and consume data. According to a forecast given by the Ericsson Mobility report issued in June 2016, there will be as many as 16 billion connected devices that will get Internet of Things (IoT) technology-enabled by 2021. Apart from having its uses in personal well-being and comfort, IoT will be a key factor in the planning of smart cities specially in the time when governments are focussed on the development of smart cities. This technology can be implemented not just for communication networks but also for sanitation, transportation, healthcare, energy use and much more.",
"title": ""
},
{
"docid": "f422d47c2b2adc7505b61503ddfa3c48",
"text": "Distributed Denial of Service (DDoS) attacks are some of the most persistent threats on the Internet today. The evolution of DDoS attacks calls for an in-depth analysis of those attacks. A better understanding of the attackers’ behavior can provide insights to unveil patterns and strategies utilized by attackers. The prior art on the attackers’ behavior analysis often falls in two aspects: it assumes that adversaries are static, and makes certain simplifying assumptions on their behavior, which often are not supported by real attack data. In this paper, we take a data-driven approach to designing and validating three DDoS attack models from temporal (e.g., attack magnitudes), spatial (e.g., attacker origin), and spatiotemporal (e.g., attack inter-launching time) perspectives. We design these models based on the analysis of traces consisting of more than 50,000 verified DDoS attacks from industrial mitigation operations. Each model is also validated by testing its effectiveness in accurately predicting future DDoS attacks. Comparisons against simple intuitive models further show that our models can more accurately capture the essential features of DDoS attacks.",
"title": ""
},
{
"docid": "d69e8f1e75d74345a93f4899b2a0f073",
"text": "CONTEXT\nThis paper provides an overview of the contribution of medical education research which has employed focus group methodology to evaluate both undergraduate education and continuing professional development.\n\n\nPRACTICALITIES AND PROBLEMS\nIt also examines current debates about the ethics and practicalities involved in conducting focus group research. It gives guidance as to how to go about designing and planning focus group studies, highlighting common misconceptions and pitfalls, emphasising that most problems stem from researchers ignoring the central assumptions which underpin the qualitative research endeavour.\n\n\nPRESENTING AND DEVELOPING FOCUS GROUP RESEARCH\nParticular attention is paid to analysis and presentation of focus group work and the uses to which such information is put. Finally, it speculates about the future of focus group research in general and research in medical education in particular.",
"title": ""
},
{
"docid": "c0b27b81cf6475866e6e794bedfee474",
"text": "Nowadays, many e-Commerce tools support customers with automatic recommendations. Many of them are centralized and lack in ef ciency and scalability, while other ones are distributed and require a computational overhead excessive for many devices. Moreover, all the past proposals are not open and do not allow new personalized terms to be introduced into the domain ontology. In this paper, we present a distributed recommender, based on a multi-tiered agent system, trying to face the issues outlined above. The proposed system is able to generate very effective suggestions without a too onerous computational task. We show that our system introduces signi cant advantages in terms of openess, privacy and security.",
"title": ""
},
{
"docid": "300bff5036b5b4e83a4bc605020b49e3",
"text": "Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.",
"title": ""
},
{
"docid": "4a7a4db8497b0d13c8411100dab1b207",
"text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.",
"title": ""
},
{
"docid": "4772fb61d2a967470bdd0e9b3f2ead07",
"text": "This study examined the relationships of three levels of reading fluency, the individual word, the syntactic unit, and the whole passage, to reading comprehension among 278 fifth graders heterogeneous in reading ability. Hierarchical regression analyses revealed that reading fluency at each level related uniquely to performance on a standardized reading comprehension test in a model including inferencing skill and background knowledge. The study supported an automaticity effect for word recognition speed and an automaticity-like effect related to syntactic processing skill. Additionally, hierarchical regressions using longitudinal data suggested that fluency and reading comprehension had a bidirectional relationship. The discussion emphasizes the theoretical expansion of reading fluency to three levels of cognitive processes and the relations of these processes to reading comprehension.",
"title": ""
},
{
"docid": "592b959fb3beef020e9dbafd804d897f",
"text": "In this paper, we study the effectiveness of phishing blacklists. We used 191 fresh phish that were less than 30 minutes old to conduct two tests on eight anti-phishing toolbars. We found that 63% of the phishing campaigns in our dataset lasted less than two hours. Blacklists were ineffective when protecting users initially, as most of them caught less than 20% of phish at hour zero. We also found that blacklists were updated at different speeds, and varied in coverage, as 47% 83% of phish appeared on blacklists 12 hours from the initial test. We found that two tools using heuristics to complement blacklists caught significantly more phish initially than those using only blacklists. However, it took a long time for phish detected by heuristics to appear on blacklists. Finally, we tested the toolbars on a set of 13,458 legitimate URLs for false positives, and did not find any instance of mislabeling for either blacklists or heuristics. We present these findings and discuss ways in which anti-phishing tools can be improved.",
"title": ""
},
{
"docid": "31d3f4eabc7706cb30cfc9e8d9c37b32",
"text": "BACKGROUND\nTestosterone can motivate human approach and avoidance behavior. Specifically, the conscious recognition of and implicit reaction to angry facial expressions is influenced by testosterone. The study tested whether exogenous testosterone modulates the personal distance (PD) humans prefer in a social threat context.\n\n\nMETHODS\n82 healthy male participants underwent either transdermal testosterone (testosterone group) or placebo application (placebo group). Each participant performed a computerized stop-distance task before (T1) and 3.5h after (T2) treatment, during which they indicated how closely they would approach a human, animal or virtual character with varying emotional expression.\n\n\nRESULTS\nMen's PD towards humans and animals varied as a function of their emotional expression. In the testosterone group, a pre-post comparison indicated that the administration of 50mg testosterone was associated with a small but significant reduction of men's PD towards aggressive individuals. Men in the placebo group did not change the initially chosen PD after placebo application independent of the condition. However comparing the testosterone and placebo group after testosterone administration did not reveal significant differences. While the behavioral effect was small and only observed as within-group effect it was repeatedly and selectively shown for men's PD choices towards an angry woman, angry man and angry dog in the testosterone group. In line with the literature, our findings in young men support the influential role of exogenous testosterone on male's approach behavior during social confrontations.",
"title": ""
},
{
"docid": "f795576f7927f8c0b4543d31c43c0675",
"text": "Computer scientists, linguists, stylometricians, and cognitive scientists have successfully divided corpora into modes, domains, genres, registers, and authors. The limitations for these successes, however, often result from insufficient indices with which their corpora are analyzed. In this paper, we use Coh-Metrix, a computational tool that analyzes text on over 200 indices of cohesion and difficulty. We demonstrate how, with the benefit of statistical analysis, texts can be analyzed for subtle, yet meaningful differences. In this paper, we report evidence that authors within the same register can be computationally distinguished despite evidence that stylistic markers can also shift significantly over time.",
"title": ""
},
{
"docid": "c5d859f1c58651cf147e00b82b87fc1d",
"text": "In this work, we apply an attention-gated network to real-time automated scan plane detection for fetal ultrasound screening. Scan plane detection in fetal ultrasound is a challenging problem due the poor image quality resulting in low interpretability for both clinicians and automated algorithms. To solve this, we propose incorporating self-gated soft-attention mechanisms. A soft-attention mechanism generates a gating signal that is end-to-end trainable, which allows the network to contextualise local information useful for prediction. The proposed attention mechanism is generic and it can be easily incorporated into any existing classification architectures, while only requiring a few additional parameters. We show that, when the base network has a high capacity, the incorporated attention mechanism can provide efficient object localisation while improving the overall performance. When the base network has a low capacity, the method greatly outperforms the baseline approach and significantly reduces false positives. Lastly, the generated attention maps allow us to understand the model’s reasoning process, which can also be used for weakly supervised object localisation.",
"title": ""
},
{
"docid": "1dac1fc798794517d8db162a9ac80007",
"text": "We describe an automated method for image colorization that learns to colorize from examples. Our method exploits a LEARCH framework to train a quadratic objective function in the chromaticity maps, comparable to a Gaussian random field. The coefficients of the objective function are conditioned on image features, using a random forest. The objective function admits correlations on long spatial scales, and can control spatial error in the colorization of the image. Images are then colorized by minimizing this objective function. We demonstrate that our method strongly outperforms a natural baseline on large-scale experiments with images of real scenes using a demanding loss function. We demonstrate that learning a model that is conditioned on scene produces improved results. We show how to incorporate a desired color histogram into the objective function, and that doing so can lead to further improvements in results.",
"title": ""
},
{
"docid": "d6146614330de1da7ae1a4842e2768c1",
"text": "Series-connected power switch provides a viable solution to implement high voltage and high frequency converters. By using the commercially available 1200V Silicon Carbide (SiC) Junction Field Effect Transistor (JFET) and Metal Oxide semiconductor Filed-effect Transistor (MOSFET), a 6 kV SiC hybrid power switch concept and its application are demonstrated. To solve the parameter deviation issue in the series device structure, an optimized voltage control method is introduced, which can guarantee the equal voltage sharing under both static and dynamic state. Without Zener diode arrays, this strategy can significantly reduce the turn-off switching loss. Moreover, this hybrid MOSFET-JFETs concept is also presented to suppress the silicon MOSFET parasitic capacitance effect. In addition, the positive gate drive voltage greatly accelerates turn-on speed and decreases the switching loss. Compared with the conventional super-JFETs, the proposed scheme is suitable for series-connected device, and can achieve better performance. The effectiveness of this method is validated by simulations and experiments, and promising results are obtained.",
"title": ""
},
{
"docid": "55a03f4c7c99b08e40aa592de7718b99",
"text": "We analyze the time pattern of the activity of a serial killer, who during 12 years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of \"Devil's staircase\" type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis.",
"title": ""
}
] |
scidocsrr
|
2d1676b9355e16a8129d121c6e3826fb
|
Image Denoising Using the Higher Order Singular Value Decomposition
|
[
{
"docid": "db8325925cb9fd1ebdcf7480735f5448",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
}
] |
[
{
"docid": "1b61dc674649cca3b46982a7c0b3e1d9",
"text": "Recently, with the explosive increase of automobiles in China, people park their cars on streets without following the rules and police has hard time to conduct law enforcement without introducing effective street parking system. To solve the problem, we propose a SPS (street parking system) based on wireless sensor networks. For accurately detecting a parking car, a parking algorithm based on state machine is also proposed. As the vehicle detection in SPS is absolutely critical, we use a Honeywell 3-axis magnetic sensor to detect vehicle. However, the magnetic sensor may be affected by the fluctuation of the outside temperature. To solve the problem, we introduce a moving drift factor. On the parking lot in Shenzhen Institutes of Advanced Technology (SIAT), 62 sensor devices are deployed to evaluate the performance of SPS. By running the system for several months, we observe the vehicle detection accurate rate of the SPS is nearly 99%. The proposed SPS is energy efficient. The end device can run about 7 years with one 2400mAh AA battery.",
"title": ""
},
{
"docid": "b4cc3716abcb57b45a12c31daab8a89f",
"text": "The original ImageNet dataset is a popular large-scale benchmark for training Deep Neural Networks. Since the cost of performing experiments (e.g, algorithm design, architecture search, and hyperparameter tuning) on the original dataset might be prohibitive, we propose to consider a downsampled version of ImageNet. In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet, our proposed ImageNet32x32 (and its variants ImageNet64x64 and ImageNet16x16) contains exactly the same number of classes and images as ImageNet, with the only difference that the images are downsampled to 32×32 pixels per image (64×64 and 16×16 pixels for the variants, respectively). Experiments on these downsampled variants are dramatically faster than on the original ImageNet and the characteristics of the downsampled datasets with respect to optimal hyperparameters appear to remain similar. The proposed datasets and scripts to reproduce our results are available at http://image-net.org/download-images and https://github.com/PatrykChrabaszcz/Imagenet32_Scripts",
"title": ""
},
{
"docid": "c491e39bbfb38f256e770d730a50b2e1",
"text": "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.",
"title": ""
},
{
"docid": "e5e4d7fae41e76c0464b8d6e8ba1f539",
"text": "Individual differences in personality affect users’ online activities as much as they do in the offline world. This work, based on a sample of over a third of a million users, examines how users’ behaviour in the online environment, captured by their website choices and Facebook profile features, relates to their personality, as measured by the standard Five Factor Model personality questionnaire. Results show that there are psychologically meaningful links between users’ personalities, their website preferences and Facebook profile features. We show how website audiences differ in terms of their personality, present the relationships between personality and Facebook profile features, and show how an individual’s personality can be predicted from Facebook profile features. We conclude that predicting a user’s personality profile can be applied to personalize content, optimize search results, and improve online advertising.",
"title": ""
},
{
"docid": "9832eb4b5d47267d7b99e87bf853d30e",
"text": "Generative Adversarial Networks (GANs) have recently achieved significant improvement on paired/unpaired image-to-image translation, such as photo→ sketch and artist painting style transfer. However, existing models can only be capable of transferring the low-level information (e.g. color or texture changes), but fail to edit high-level semantic meanings (e.g., geometric structure or content) of objects. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, which aims to modify the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow→sheep, motor→ bicycle, cat→dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs. Quantitative results further demonstrate the superiority of our model on generating manipulated results with high visual fidelity and reasonable object semantics.",
"title": ""
},
{
"docid": "116ae1a8d8d8cb5a776ab665a6fc1c8c",
"text": "A low noise transimpedance amplifier (TIA) is used in radiation detectors to transform the current pulse produced by a photo-sensitive device into an output voltage pulse with a specified amplitude and shape. We consider here the specifications of a PET (positron emission tomography) system. We review the traditional approach, feedback TIA, using an operational amplifier with feedback, and we investigate two alternative circuits: the common-gate TIA, and the regulated cascode TIA. We derive the transimpedance function (the poles of which determine the pulse shaping); we identify the transistor in each circuit that has the dominant noise source, and we obtain closed-form equations for the rms output noise voltage. We find that the common-gate TIA has high noise, but the regulated cascode TIA has the same dominant noise contribution as the feedback TIA, if the same maximum transconductance value is considered. A circuit prototype of a regulated cascode TIA is designed in a 0.35 μm CMOS technology, to validate the theoretical results by simulation and by measurement.",
"title": ""
},
{
"docid": "298df39e9b415bc1eed95ed56d3f32df",
"text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.",
"title": ""
},
{
"docid": "b1ae4cfe9ce7a88eb0a503bfafe9606d",
"text": "The aim of Chapter 2 is to give an overview of the GPR basic principles and technology. A lot of definitions and often-used terms that will be used throughout the whole work will be explained here. Readers who are familiar with GPR and the demining application can skip parts of this chapter. Section 2.2.4 however can be interesting since a description of the hardware and the design parameters of a time domain GPR are given there. The description is far from complete, but it gives a good overview of the technological difficulties encountered in GPR systems.",
"title": ""
},
{
"docid": "147270ce3991745440473e698bb1f0a8",
"text": "In celiac disease (CD), the intestinal lesions can be patchy and partial villous atrophy may elude detection at standard endoscopy (SE). Narrow Band Imaging (NBI) system in combination with a magnifying endoscope (ME) is a simple tool able to obtain targeted biopsy specimens. The aim of the study was to assess the correlation between NBI-ME and histology in CD diagnosis and to compare diagnostic accuracy between NBI-ME and SE in detecting villous abnormalities in CD. Forty-four consecutive patients with suspected CD undergoing upper gastrointestinal endoscopy have been prospectively evaluated. Utilizing both SE and NBI-ME, observed surface patterns were compared with histological results obtained from biopsy specimens using the k-Cohen agreement coefficient. NBI-ME identified partial villous atrophy in 12 patients in whom SE was normal, with sensitivity, specificity, and accuracy of 100%, 92.6%, and 95%, respectively. The overall agreement between NBI-ME and histology was significantly higher when compared with SE and histology (kappa score: 0.90 versus 0.46; P = 0.001) in diagnosing CD. NBI-ME could help identify partial mucosal atrophy in the routine endoscopic practice, potentially reducing the need for blind biopsies. NBI-ME was superior to SE and can reliably predict in vivo the villous changes of CD.",
"title": ""
},
{
"docid": "81cd92a9984a19dc624f60efd3b68181",
"text": "S. Touzard, A. Grimm, Z. Leghtas, S. O. Mundhada, P. Reinhold, C. Axline, M. Reagor, K. Chou, J. Blumoff, K. M. Sliwa, S. Shankar, L. Frunzio, R. J. Schoelkopf, M. Mirrahimi, and M. H. Devoret Department of Applied Physics and Physics, Yale University, New Haven, Connecticut 06520, USA Yale Quantum Institute, Yale University, New Haven, Connecticut 06520, USA and QUANTIC team, INRIA de Paris, 2 Rue Simone Iff, 75012 Paris, France",
"title": ""
},
{
"docid": "aa6a22096c633072b1e362f20e18a4e4",
"text": "In this paper, we propose a new deep framework which predicts facial attributes and leverage it as a soft modality to improve face identification performance. Our model is an end to end framework which consists of a convolutional neural network (CNN) whose output is fanned out into two separate branches; the first branch predicts facial attributes while the second branch identifies face images. Contrary to the existing multi-task methods which only use a shared CNN feature space to train these two tasks jointly, we fuse the predicted attributes with the features from the face modality in order to improve the face identification performance. Experimental results show that our model brings benefits to both face identification as well as facial attribute prediction performance, especially in the case of identity facial attributes such as gender prediction. We tested our model on two standard datasets annotated by identities and face attributes. Experimental results indicate that the proposed model outperforms most of the current existing face identification and attribute prediction methods.",
"title": ""
},
{
"docid": "d6f6c34195f3213d4a6e541c75f27e9e",
"text": "While single-view human action recognition has attracted considerable research study in the last three decades, multi-view action recognition is, still, a less exploited field. This paper provides a comprehensive survey of multi-view human action recognition approaches. The approaches are reviewed following an application-based categorization: methods are categorized based on their ability to operate using a fixed or an arbitrary number of cameras. Finally, benchmark databases frequently used for evaluation of multi-view approaches are briefly described.",
"title": ""
},
{
"docid": "41d32df9d58f9c38f75010c87c0c3327",
"text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.",
"title": ""
},
{
"docid": "e133f005e6bae09d7d67da1b4e4ec176",
"text": "Because of broad range of applications and distinctive properties of aptamer, the global market size was valued at USD 723.6 million in 2016 and is projected to grow at the compound annual growth rate (CAGR) of 28.2%,1 and expected to reach $8.91 Billion by 2025, growing rapidly. Aptamers and the derivatives are also referred to as “synthetic antibodies” or “chemical antibodies”2‒5 that are able to bind with high affinity and specificity to almost all types of molecules as well as antigens, cells. Because of their unique properties, aptamers have a wide range of applications, particularly in biological and medical sciences, including diagnosis, therapies, forensics, and biodefense.6‒9 So far, hundreds of aptamer reagents have been developed for the applications,10 which are faster, cheaper, and less or without the predictable problems associated with the production of recombinant antibodies. This review summarizes the resent technologies of modified analogous of aptamer, so called pseudo aptamers in this script.",
"title": ""
},
{
"docid": "2ec14d4544d1fcc6591b6f31140af204",
"text": "To better understand the molecular and cellular differences in brain organization between human and nonhuman primates, we performed transcriptome sequencing of 16 regions of adult human, chimpanzee, and macaque brains. Integration with human single-cell transcriptomic data revealed global, regional, and cell-type–specific species expression differences in genes representing distinct functional categories. We validated and further characterized the human specificity of genes enriched in distinct cell types through histological and functional analyses, including rare subpallial-derived interneurons expressing dopamine biosynthesis genes enriched in the human striatum and absent in the nonhuman African ape neocortex. Our integrated analysis of the generated data revealed diverse molecular and cellular features of the phylogenetic reorganization of the human brain across multiple levels, with relevance for brain function and disease.",
"title": ""
},
{
"docid": "68c988688a772b35b014700cd2d1d906",
"text": "In today’s new economy characterized by industrial change, globalization, increased intensive competition, knowledge sharing and transfer, and information technology revolution, traditional classroom education or training does not always satisfy all the needs of the new world of lifelong learning. Learning is shifting from instructor-centered to learner-centered, and is undertaken anywhere, from classrooms to homes and offices. E-Learning, referring to learning via the Internet, provides people with a flexible and personalized way to learn. It offers learning-on-demand opportunities and reduces learning cost. This paper describes the demands for e-Learning and related research, and presents a variety of enabling technologies that can facilitate the design and implementation of e-Learning systems. Armed with the advanced information and communication technologies, e-Learning is having a far-reaching impact on learning in the new millennium.",
"title": ""
},
{
"docid": "73581b5a936a75f936112747bd05003e",
"text": "We consider the problem of creating secure and resourceefficient blockchain networks i.e., enable a group of mutually distrusting participants to efficiently share state and then agree on an append-only history of valid operations on that shared state. This paper proposes a new approach to build such blockchain networks. Our key observation is that an append-only, tamper-resistant ledger (when used as a communication medium for messages sent by participants in a blockchain network) offers a powerful primitive to build a simple, flexible, and efficient consensus protocol, which in turn serves as a solid foundation for building secure and resource-efficient blockchain networks. A key ingredient in our approach is the abstraction of a blockchain service provider (BSP), which oversees creating and updating an append-only, tamper-resistant ledger, and a new distributed protocol called Caesar consensus, which leverages the BSP’s interface to enable members of a blockchain network to reach consensus on the BSP’s ledger—even when the BSP or a threshold number of members misbehave arbitrarily. By design, the BSP is untrusted, so it can run on any untrusted infrastructure and can be optimized for better performance without affecting end-to-end security. We implement our proposal in a system called VOLT. Our experimental evaluation suggests that VOLT incurs low resource costs and provides better performance compared to alternate approaches.",
"title": ""
},
{
"docid": "c3caac0233e0deb53dc46b549e280295",
"text": "Humphrey Ridley, M.D. (1653-1708), is a relatively unknown historical figure, belonging to the postmedieval era of neuroanatomical discovery. He was born in the market town of Mansfield, 14 miles from the county of Nottinghamshire, England. After studying at Merton College, Oxford, he pursued medicine at Leiden University in the Netherlands. In 1688, he was incorporated as an M.D. at Cambridge. Ridley authored the first original treatise in English language on neuroanatomy, The Anatomy of the Brain Containing its Mechanisms and Physiology: Together with Some New Discoveries and Corrections of Ancient and Modern Authors upon that Subject. Ridley described the venous anatomy of the eponymous circular sinus in connection with the parasellar compartment. His methods were novel, unique, and effective. To appreciate the venous anatomy, he preferred to perform his anatomical dissections on recently executed criminals who had been hanged. These cadavers had considerable venous engorgement, which made the skull base venous anatomy clearer. To enhance the appearance of the cerebral vasculature further, he used tinged wax and quicksilver in the injections. He set up experimental models to answer questions definitively, in proving that the arachnoid mater is a separate meningeal layer. The first description of the subarachnoid cisterns, blood-brain barrier, and the fifth cranial nerve ganglion with its branches are also attributed to Ridley. This historical vignette revisits Ridley's life and academic work that influenced neuroscience and neurosurgical understanding in its infancy. It is unfortunate that most of his novel contributions have gone unnoticed and uncited. The authors hope that this article will inform the neurosurgical community of Ridley's contributions to the field of neurosurgery.",
"title": ""
},
{
"docid": "54ebafe33f0e0cffe2431e9fb9a5bed5",
"text": "The distributed query optimization is one of the hardest problems in the database area. The great commercial success of database systems is partly due to the development of sophisticated query optimization technology where users pose queries in a declarative way using SQL or OQL and the optimizer of the database system finds a good way (i. e. plan) to execute these queries. The optimizer, for example, determines which indices should be used to execute a query and in which order the operations of a query (e. g. joins, selects, and projects) should be executed. To this end, the optimizer enumerates alternative plans, estimates the cost of every plan using a cost model, and chooses the plan with lowest cost. There has been much research into this field. In this paper, we study the problem of distributed query optimization; we focus on the basic components of the distributed query optimizer, i. e. search space, search strategy, and cost model. A survey of the available work into this field is given. Finally, some future work is highlighted based on some recent work that uses mobile agent",
"title": ""
}
] |
scidocsrr
|
42782ee40abec5a6ca06bd855073d7fb
|
Planning in Narrative Generation : A Review of Plan-Based Approaches to the Generation of Story , Discourse and Interactivity in Narratives
|
[
{
"docid": "30d119e1c2777988aab652e34fb76846",
"text": "The relationship between games and story remains a divisive question among game fans, designers, and scholars alike. At a recent academic Games Studies conference, for example, a blood feud threatened to erupt between the self-proclaimed Ludologists, who wanted to see the focus shift onto the mechanics of game play, and the Narratologists, who were interested in studying games alongside other storytelling media.(1) Consider some recent statements made on this issue:",
"title": ""
},
{
"docid": "f672af55234d85a113e45fcb65a2149f",
"text": "In recent years, the fields of Interactive Storytelling and Player Modelling have independently enjoyed increased interest in both academia and the computer games industry. The combination of these technologies, however, remains largely unexplored. In this paper, we present PaSSAGE (PlayerSpecific Stories via Automatically Generated Events), an interactive storytelling system that uses player modelling to automatically learn a model of the player’s preferred style of play, and then uses that model to dynamically select the content of an interactive story. Results from a user study evaluating the entertainment value of adaptive stories created by our system as well as two fixed, pre-authored stories indicate that automatically adapting a story based on learned player preferences can increase the enjoyment of playing a computer role-playing game for certain types of players.",
"title": ""
}
] |
[
{
"docid": "7d6e680b9a4d78753f6fed9fd0e4650b",
"text": "MGDIS SA is a software editing company that underwent a major strategic and technical change during the past three years, investing 17 300 man. Days rewriting its core business software from monolithic architecture to a Web Oriented Architecture using microservices. The paper presents technical lessons learned during and from this migration by addressing three crucial questions for a successful context-adapted migration towards a Web Oriented Architecture: how to determine (i) the most suitable granularity of micro-services, (ii) the most appropriate deployment and (iii) the most efficient orchestration?",
"title": ""
},
{
"docid": "012d69ddc3410c85d265be54ae07767f",
"text": "The family of intelligent IPS-drivers was invented to drive high power IGBT modules with blocking voltages up to 6,500V. They may be used use in industrial drives, power supplies, transportation, renewable energies as well as induction heating applications. The IPS- drivers decrease the switching losses and offer a reliable protection for high power IGBT modules. Varying the software enables an easy adaptation to specific applications. The main features of the IPS-drivers are: variable gate ON and OFF resistors, advanced desaturation and di/dt protections, active feedback clamping, high peak output current and output power, short signal transition times, and multiple soft shut down.",
"title": ""
},
{
"docid": "7fa92e07f76bcefc639ae807147b8d7b",
"text": "We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. Using this approach, we extract parallel data from large Chinese, Arabic, and English non-parallel newspaper corpora. We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system. We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available.",
"title": ""
},
{
"docid": "09eb96a9be1c8ee56503881e0fd936d5",
"text": "Essential oils are volatile, natural, complex mixtures of compounds characterized by a strong odour and formed by aromatic plants as secondary metabolites. The chemical composition of the essential oil obtained by hydrodistillation from the whole plant of Pulicaria inuloides grown in Yemen and collected at full flowering stage were analyzed by Gas chromatography-Mass spectrometry (GC-MS). Several oil components were identified based upon comparison of their mass spectral data with those of reference compounds. The main components identified in the oil were 47.34% of 2-Cyclohexen-1-one, 2-methyl-5-(1-methyl with Hexadecanoic acid (CAS) (12.82%) and Ethane, 1,2-diethoxy(9.613%). In this study, mineral contents of whole plant of P. inuloides were determined by atomic absorption spectroscopy. Highest level of K, Mg, Na, Fe and Ca of 159.5, 29.5, 14.2, 13.875 and 5.225 mg/100 g were found in P. inuloides.",
"title": ""
},
{
"docid": "e3fea0684301694ff0af780b96dfa226",
"text": "This paper presents a novel noise robust front-end algorithm and evaluates its performance on the Aurora 2 database. Most algorithms aimed at improving the performance of recognisers in background noise make an estimate of the noise spectrum that is then used to obtain an improved estimate of the spectrum of the underlying speech. In the case of stationary noises it is sufficient to take an average noise spectrum from the period before the speech utterance and/or to use a speech/non-speech detector to update this estimate using the noise sampled from any gaps in an utterance. For nonstationary noises where the noise spectrum changes faster than the duration of a typical utterance (e.g. within 0.5s) then there can be substantial differences between the estimated and actual noise spectrum for a particular frame, leading to poor performance. The algorithm presented here provides an improved estimate of the noise that can be tracked throughout the duration of the speech utterance, by making use of the harmonic structure of the voiced speech spectrum. This running estimate of the noise is obtained by sampling the noise spectrum in the gaps (or “tunnels”) between the harmonic spectral peaks. Compared to the ETSI standard MFCC front-end [1], the proposed algorithm delivers an average improvement in performance of 43.93% on the Aurora 2 database [2]. 1. Front-end Algorithm Overview A summary of the algorithm is shown in figure 1 and the details of the processing performed by each block described in the section 2. Figure 1: Block diagram of front-end algorithm The spectrum of the signal is first obtained by taking an FFT. The peaks in this spectrum are then determined from spectral derivatives. Each of these candidate peaks are analysed to categorise them as a peak coming from either a voiced speech harmonic or noise. The noise spectrum at a peak categorised as speech is estimated by interpolation from the adjacent noise spectra in the surrounding “tunnels”. These frame based noise measurements contribute to the running average of the noise spectrum in the Mel domain. Whilst this noise estimate could be used in many alternative algorithms, in this implementation it is used for an SNR dependent spectral subtraction. The remaining processing blocks are spectral normalisation performed in the Mel domain, normalising with the long term average of the spectrum and also by the frame energy. Finally a cube root compression is performed followed by cosine transform to produce 12 cepstral coefficients and a log energy measure. 2. Front-end Algorithm Details",
"title": ""
},
{
"docid": "6308f5032a7d3b13ae2553d025d26311",
"text": "It is generally a challenging task to tell apart malware from benign applications: obfuscation and string encryption, used by malware as well as goodware, often render static analyses ineffective. In addition, malware frequently tricks dynamic analyses by detecting the execution environment emulated by the analysis tool and then refraining from malicious behavior. In this work, however, we present HARVESTER, a novel approach that combines a variation of program slicing with dynamic execution, and show that it can be highly effective in the triage of current mobile malware families. For this malware, HARVESTER allows a fully automatic extraction of runtime values from any position in the Android bytecode. Target phone numbers and messages of SMS messages, decryption keys or concrete URLs that are called inside an Android application can usually be extracted even if the application is highly obfuscated, and even if the application uses anti-analysis techniques (e.g., emulator detection or delayed execution / “time bombs”), dynamic code loading and native method calls for string decryption. As we show, HARVESTER not only aids human malware analysts, but also acts as an automatic deobfuscation tool that reverts the introduction of encrypted strings and reflective method calls as they are often introduced by obfuscators such as DexGuard. We will make available HARVESTER as open source. Experiments on 13,502 current malware samples show that HARVESTER can extract many sensitive values from applications, usually in under one minute, and this fully automatically and without requiring the simulation of UI actions. Our results further show that HARVESTER’s deobfuscation can enhance existing static and dynamic analyses, for instance with FlowDroid and TaintDroid.",
"title": ""
},
{
"docid": "b0c4b345063e729d67396dce77e677a6",
"text": "Work done on the implementation of a fuzzy logic controller in a single intersection of two one-way streets is presented. The model of the intersection is described and validated, and the use of the theory of fuzzy sets in constructing a controller based on linguistic control instructions is introduced. The results obtained from the implementation of the fuzzy logic controller are tabulated against those corresponding to a conventional effective vehicle-actuated controller. With the performance criterion being the average delay of vehicles, it is shown that the use of a fuzzy logic controller results in a better performance.",
"title": ""
},
{
"docid": "3231eedb6c06d3ce428f3c20dac5c37d",
"text": "In this study, differential evolution algorithm (DE) is proposed to train a wavelet neural network (WNN). The resulting network is named as differential evolution trained wavelet neural network (DEWNN). The efficacy of DEWNN is tested on bankruptcy prediction datasets viz. US banks, Turkish banks and Spanish banks. Further, its efficacy is also tested on benchmark datasets such as Iris, Wine and Wisconsin Breast Cancer. Moreover, Garson’s algorithm for feature selection in multi layer perceptron is adapted in the case of DEWNN. The performance of DEWNN is compared with that of threshold accepting trained wavelet neural network (TAWNN) [Vinay Kumar, K., Ravi, V., Mahil Carr, & Raj Kiran, N. (2008). Software cost estimation using wavelet neural networks. Journal of Systems and Software] and the original wavelet neural network (WNN) in the case of all data sets without feature selection and also in the case of four data sets where feature selection was performed. The whole experimentation is conducted using 10-fold cross validation method. Results show that soft computing hybrids viz., DEWNN and TAWNN outperformed the original WNN in terms of accuracy and sensitivity across all problems. Furthermore, DEWNN outscored TAWNN in terms of accuracy and sensitivity across all problems except Turkish banks dataset. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5c95665b5608a40d1dc2499c6fd6d21e",
"text": "Camber is one of the most significant defects in the first stages of hot rolling of steel plates. This kind of defect may cause the clogging of finishing mills, but it is also the visible effect of alterations in the process. In this paper we describe the design and implementation of a computer vision system for real-time measurement of camber in a hot rolling mill. Our goal is to provide valuable feedback information to improve AGC operation. As ground truth values are almost impossible to obtain, we have analyzed the relationship among measured camber and other process variables in order to validate our results. The system has proved to be robust, and at the same time there is a strong relationship between known problems in the mill and system readings.",
"title": ""
},
{
"docid": "079381f51f703b6e6d7035d328ee4207",
"text": "Over the last few years, the way people express their opinions has changed dramatically with the progress of social networks, web communities, blogs, wikis, and other online collaborative media. Now, people buy a product and express their opinion in social media so that other people can acquire knowledge about that product before they proceed to buy it. On the other hand, for the companies it has become necessary to keep track of the public opinions on their products to achieve customer satisfaction. Therefore, nowadays opinion mining is a routine task for every company for developing a widely acceptable product or providing satisfactory service. Concept-based opinion mining is a new area of research. The key parts of this research involve extraction of concepts from the text, determining product aspects, and identifying sentiment associated with these aspects. In this paper, we address each one of these tasks using a novel approach that takes text as input and use dependency parse tree-based rules to extract concepts and aspects and identify the associated sentiment. On the benchmark datasets, our method outperforms all existing state-of-the-art systems.",
"title": ""
},
{
"docid": "dbfbdd4866d7fd5e34620c82b8124c3a",
"text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. 
According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. “Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. 
b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. (11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference ",
"title": ""
},
{
"docid": "6fe9aaaa0033d3322e989588df3105fe",
"text": "Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients’ symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach.",
"title": ""
},
{
"docid": "5350ffea7a4187f0df11fd71562aba43",
"text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.",
"title": ""
},
{
"docid": "aa3be1c132e741d2c945213cfb0d96ad",
"text": "Collaborative filtering (CF) is one of the most successful recommendation approaches. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. However we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. Then we propose an unified framework to extend the traditional CF algorithms by utilizing the subgroups information for improving their top-N recommendation performance. Our approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real world data sets have demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "a8cad81570a7391175acdcf82bc9040b",
"text": "A model of Convolutional Fuzzy Neural Network for real world objects and scenes images classification is proposed. The Convolutional Fuzzy Neural Network consists of convolutional, pooling and fully-connected layers and a Fuzzy Self Organization Layer. The model combines the power of convolutional neural networks and fuzzy logic and is capable of handling uncertainty and impreciseness in the input pattern representation. The Training of The Convolutional Fuzzy Neural Network consists of three independent steps for three components of the net.",
"title": ""
},
{
"docid": "23ccd8ed9c230aa39b7511d95fdb17da",
"text": "Executive Summary In the rapidly changing field of Information Systems, educational programs must be continually reevaluated and revised. This can be a daunting task. To make this process more manageable and to create programs that more accurately reflect the demands of the marketplace, a curriculum revision process is presented. As part of the curriculum revision process, a study was conducted to determine the expected skills and knowledge required for Information Systems professionals in three general staffing groups: programmers, analysts, and end-user support. A survey instrument was developed asking respondents to rate the importance of each knowledge/skill area three years from now for each of the staffing groups. The results show that Information Systems knowledge relating to the entire organization and overall business knowledge will be important with less emphasis on advanced IS applications. The so-called ‘soft skills’ such as teamwork and collaboration, planning and leading projects, presentation delivery, and writing skills will be critical for success in the Information Systems profession. More importance will be placed on web-based languages rather than more traditional languages such as COBOL. Based on the analysis, a skills matrix is presented that can be used as a foundation for developing courses. This paper also describes a curriculum development model that can be used as a guide for curriculum revision.",
"title": ""
},
{
"docid": "eaa3284dbe2bbd5c72df99d76d4909a7",
"text": "BACKGROUND\nWorldwide, depression is rated as the fourth leading cause of disease burden and is projected to be the second leading cause of disability by 2020. Annual depression-related costs in the United States are estimated at US $210.5 billion, with employers bearing over 50% of these costs in productivity loss, absenteeism, and disability. Because most adults with depression never receive treatment, there is a need to develop effective interventions that can be more widely disseminated through new channels, such as employee assistance programs (EAPs), and directly to individuals who will not seek face-to-face care.\n\n\nOBJECTIVE\nThis study evaluated a self-guided intervention, using the MoodHacker mobile Web app to activate the use of cognitive behavioral therapy (CBT) skills in working adults with mild-to-moderate depression. It was hypothesized that MoodHacker users would experience reduced depression symptoms and negative cognitions, and increased behavioral activation, knowledge of depression, and functioning in the workplace.\n\n\nMETHODS\nA parallel two-group randomized controlled trial was conducted with 300 employed adults exhibiting mild-to-moderate depression. Participants were recruited from August 2012 through April 2013 in partnership with an EAP and with outreach through a variety of additional non-EAP organizations. Participants were blocked on race/ethnicity and then randomly assigned within each block to receive, without clinical support, either the MoodHacker intervention (n=150) or alternative care consisting of links to vetted websites on depression (n=150). Participants in both groups completed online self-assessment surveys at baseline, 6 weeks after baseline, and 10 weeks after baseline. Surveys assessed (1) depression symptoms, (2) behavioral activation, (3) negative thoughts, (4) worksite outcomes, (5) depression knowledge, and (6) user satisfaction and usability. After randomization, all interactions with subjects were automated with the exception of safety-related follow-up calls to subjects reporting current suicidal ideation and/or severe depression symptoms.\n\n\nRESULTS\nAt 6-week follow-up, significant effects were found on depression, behavioral activation, negative thoughts, knowledge, work productivity, work absence, and workplace distress. MoodHacker yielded significant effects on depression symptoms, work productivity, work absence, and workplace distress for those who reported access to an EAP, but no significant effects on these outcome measures for those without EAP access. Participants in the treatment arm used the MoodHacker app an average of 16.0 times (SD 13.3), totaling an average of 1.3 hours (SD 1.3) of use between pretest and 6-week follow-up. Significant effects on work absence in those with EAP access persisted at 10-week follow-up.\n\n\nCONCLUSIONS\nThis randomized effectiveness trial found that the MoodHacker app produced significant effects on depression symptoms (partial eta(2) = .021) among employed adults at 6-week follow-up when compared to subjects with access to relevant depression Internet sites. The app had stronger effects for individuals with access to an EAP (partial eta(2) = .093). For all users, the MoodHacker program also yielded greater improvement on work absence, as well as the mediating factors of behavioral activation, negative thoughts, and knowledge of depression self-care. Significant effects were maintained at 10-week follow-up for work absence. 
General attenuation of effects at 10-week follow-up underscores the importance of extending program contacts to maintain user engagement. This study suggests that light-touch, CBT-based mobile interventions like MoodHacker may be appropriate for implementation within EAPs and similar environments. In addition, it seems likely that supporting MoodHacker users with guidance from counselors may improve effectiveness for those who seek in-person support.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02335554; https://clinicaltrials.gov/ct2/show/NCT02335554 (Archived by WebCite at http://www.webcitation.org/6dGXKWjWE).",
"title": ""
},
{
"docid": "cb92003c6e6344fcb2c735d3e93801b9",
"text": "Recommendation systems play a vital role to keep users engaged with personalized content in modern online platforms. Deep learning has revolutionized many research fields and there is a recent surge of interest in applying it to collaborative filtering (CF). However, existing methods compose deep learning architectures with the latent factor model ignoring a major class of CF models, neighborhood or memory-based approaches. We propose Collaborative Memory Networks (CMN), a deep architecture to unify the two classes of CF models capitalizing on the strengths of the global structure of latent factor model and local neighborhood-based structure in a nonlinear fashion. Motivated by the success of Memory Networks, we fuse a memory component and neural attention mechanism as the neighborhood component. The associative addressing scheme with the user and item memories in the memory module encodes complex user-item relations coupled with the neural attention mechanism to learn a user-item specific neighborhood. Finally, the output module jointly exploits the neighborhood with the user and item memories to produce the ranking score. Stacking multiple memory modules together yield deeper architectures capturing increasingly complex user-item relations. Furthermore, we show strong connections between CMN components, memory networks and the three classes of CF models. Comprehensive experimental results demonstrate the effectiveness of CMN on three public datasets outperforming competitive baselines. Qualitative visualization of the attention weights provide insight into the model's recommendation process and suggest the presence of higher order interactions.",
"title": ""
},
{
"docid": "36867b8478a8bd6be79902efd5e9d929",
"text": "Most state-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization system called Stonehenge, which is able to virtualize a cluster-based physical storage system along multiple dimensions, including bandwidth, capacity, and latency. As a result, Stonehenge is able to multiplex multiple virtual disks, each with a distinct bandwidth, capacity, and latency attribute, on a single physical storage system as if they are separate physical disks. A key enabling technology for Stonehenge is an efficiency-aware real-time disk scheduling algorithm called dual-queue disk scheduling, which maximizes disk utilization efficiency while providing Quality of Service (QoS) guarantees. To optimize disk utilization efficiency, Stonehenge exploits run-time measurements extensively, for admission control, computing latency-derived bandwidth requirement, and predicting disk service time.",
"title": ""
},
{
"docid": "7c3e356ab0f200a93e1284f763e1039d",
"text": "Ordered sets (and maps when data is associated with each key) are one of the most important and useful data types. The set-set functions union, intersection and difference are particularly useful in certain applications. Brown and Tarjan first described an algorithm for these functions, based on 2-3 trees, that meet the optimal Θ(m log (n/m+1)) time bounds in the comparison model (n and m ≤ n are the input sizes). Later Adams showed very elegant algorithms for the functions, and others, based on weight-balanced trees. They only require a single function that is specific to the balancing scheme---a function that joins two balanced trees---and hence can be applied to other balancing schemes. Furthermore the algorithms are naturally parallel. However, in the twenty-four years since, no one has shown that the algorithms, sequential or parallel are asymptotically work optimal. In this paper we show that Adams' algorithms are both work efficient and highly parallel (polylog span) across four different balancing schemes---AVL trees, red-black trees, weight balanced trees and treaps. To do this we use careful, but simple, algorithms for Join that maintain certain invariants, and our proof is (mostly) generic across the schemes.\n To understand how the algorithms perform in practice we have also implemented them (all code except Join is generic across the balancing schemes). Interestingly the implementations on all four balancing schemes and three set functions perform similarly in time and speedup (more than 45x on 64 cores). We also compare the performance of our implementation to other existing libraries and algorithms.",
"title": ""
}
] |
scidocsrr
|
c56ce9e1b973d67ef74b9949a32bb61d
|
The collaborative filtering recommendation based on SOM cluster-indexing CBR
|
[
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "a1c0670a27313de144451adff35ca83f",
"text": "Fab is a recommendation system designed to help users sift through the enormous amount of information available in the World Wide Web. Operational since Dec. 1994, this system combines the content-based and collaborative methods of recommendation in a way that exploits the advantages of the two approaches while avoiding their shortcomings. Fab’s hybrid structure allows for automatic recognition of emergent issues relevant to various groups of users. It also enables two scaling problems, pertaining to the rising number of users and documents, to be addressed.",
"title": ""
}
] |
[
{
"docid": "055071ff6809eaea4eeb0a9f64e49274",
"text": "Compressed bitmap indexes are used in systems such as Git or Oracle to accelerate queries. They represent sets and often support operations such as unions, intersections, differences, and symmetric differences. Several important systems such as Elasticsearch, Apache Spark, Netflix’s Atlas, LinkedIn’s Pivot, Metamarkets’ Druid, Pilosa, Apache Hive, Apache Tez, Microsoft Visual Studio Team Services and Apache Kylin rely on a specific type of compressed bitmap index called Roaring. We present an optimized software library written in C implementing Roaring bitmaps: CRoaring. It benefits from several algorithms designed for the single-instruction-multiple-data (SIMD) instructions available on commodity processors. In particular, we present vectorized algorithms to compute the intersection, union, difference and symmetric difference between arrays. We benchmark the library against a wide range of competitive alternatives, identifying weaknesses and strengths in our software. Our work is available under a liberal open-source license.",
"title": ""
},
{
"docid": "012b42c01cebf0840a429ab0e7db2914",
"text": "Silicon single-photon avalanche diodes (SPADs) are nowadays a solid-state alternative to photomultiplier tubes (PMTs) in single-photon counting (SPC) and time-correlated single-photon counting (TCSPC) over the visible spectral range up to 1-mum wavelength. SPADs implemented in planar technology compatible with CMOS circuits offer typical advantages of microelectronic devices (small size, ruggedness, low voltage, low power, etc.). Furthermore, they have inherently higher photon detection efficiency, since they do not rely on electron emission in vacuum from a photocathode as do PMTs, but instead on the internal photoelectric effect. However, PMTs offer much wider sensitive area, which greatly simplifies the design of optical systems; they also attain remarkable performance at high counting rate, and offer picosecond timing resolution with microchannel plate models. In order to make SPAD detectors more competitive in a broader range of SPC and TCSPC applications, it is necessary to face several issues in the semiconductor device design and technology. Such issues will be discussed in the context of the two possible approaches to such a challenge: employing a standard industrial high-voltage CMOS technology or developing a dedicated CMOS-compatible technology. Advances recently attained in the development of SPAD detectors will be outlined and discussed with reference to both single-element detectors and integrated detector arrays.",
"title": ""
},
{
"docid": "7411ae149016be794566261d7362f7d3",
"text": "BACKGROUND\nProcrastination, to voluntarily delay an intended course of action despite expecting to be worse-off for the delay, is a persistent behavior pattern that can cause major psychological suffering. Approximately half of the student population and 15%-20% of the adult population are presumed having substantial difficulties due to chronic and recurrent procrastination in their everyday life. However, preconceptions and a lack of knowledge restrict the availability of adequate care. Cognitive behavior therapy (CBT) is often considered treatment of choice, although no clinical trials have previously been carried out.\n\n\nOBJECTIVE\nThe aim of this study will be to test the effects of CBT for procrastination, and to investigate whether it can be delivered via the Internet.\n\n\nMETHODS\nParticipants will be recruited through advertisements in newspapers, other media, and the Internet. Only people residing in Sweden with access to the Internet and suffering from procrastination will be included in the study. A randomized controlled trial with a sample size of 150 participants divided into three groups will be utilized. The treatment group will consist of 50 participants receiving a 10-week CBT intervention with weekly therapist contact. A second treatment group with 50 participants receiving the same treatment, but without therapist contact, will also be employed. The intervention being used for the current study is derived from a self-help book for procrastination written by one of the authors (AR). It includes several CBT techniques commonly used for the treatment of procrastination (eg, behavioral activation, behavioral experiments, stimulus control, and psychoeducation on motivation and different work methods). A control group consisting of 50 participants on a wait-list control will be used to evaluate the effects of the CBT intervention. For ethical reasons, the participants in the control group will gain access to the same intervention following the 10-week treatment period, albeit without therapist contact.\n\n\nRESULTS\nThe current study is believed to result in three important findings. First, a CBT intervention is assumed to be beneficial for people suffering from problems caused by procrastination. Second, the degree of therapist contact will have a positive effect on treatment outcome as procrastination can be partially explained as a self-regulatory failure. Third, an Internet based CBT intervention is presumed to be an effective way to administer treatment for procrastination, which is considered highly important, as the availability of adequate care is limited. The current study is therefore believed to render significant knowledge on the treatment of procrastination, as well as providing support for the use of Internet based CBT for difficulties due to delayed tasks and commitments.\n\n\nCONCLUSIONS\nTo our knowledge, the current study is the first clinical trial to examine the effects of CBT for procrastination, and is assumed to render significant knowledge on the treatment of procrastination, as well as investigating whether it can be delivered via the Internet.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov: NCT01842945; http://clinicaltrials.gov/show/NCT01842945 (Archived by WebCite at http://www.webcitation.org/6KSmaXewC).",
"title": ""
},
{
"docid": "c9bab6f494d8c01e47449141526daeab",
"text": "In this letter, we propose a conceptually simple and intuitive learning objective function, i.e., additive margin softmax, for face verification. In general, face verification tasks can be viewed as metric learning problems, even though lots of face verification models are trained in classification schemes. It is possible when a large-margin strategy is introduced into the classification model to encourage intraclass variance minimization. As one alternative, angular softmax has been proposed to incorporate the margin. In this letter, we introduce another kind of margin to the softmax loss function, which is more intuitive and interpretable. Experiments on LFW and MegaFace show that our algorithm performs better when the evaluation criteria are designed for very low false alarm rate.",
"title": ""
},
{
"docid": "bdf81fccbfa77dadcad43699f815475e",
"text": "The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set).\n Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%.",
"title": ""
},
{
"docid": "e06e917918a60a6452ee0b0037d3f284",
"text": "In this paper, we examine what types of reputation information users find valuable when selecting someone to interact with in online environments. In an online experiment, we asked users to imagine that they were looking for a partner for a social chat. We found that similarity to the user and ratings from the user's friends were the most valuable pieces of reputation information when selecting chat partners. The context in which reputations were used (social chat, game or newsgroup) affected the self-reported utility of the pieces of reputation information",
"title": ""
},
{
"docid": "cf056b44b0e93ad4fcbc529437cfbec3",
"text": "Many advances in the treatment of cancer have been driven by the development of targeted therapies that inhibit oncogenic signaling pathways and tumor-associated angiogenesis, as well as by the recent development of therapies that activate a patient's immune system to unleash antitumor immunity. Some targeted therapies can have effects on host immune responses, in addition to their effects on tumor biology. These immune-modulating effects, such as increasing tumor antigenicity or promoting intratumoral T cell infiltration, provide a rationale for combining these targeted therapies with immunotherapies. Here, we discuss the immune-modulating effects of targeted therapies against the MAPK and VEGF signaling pathways, and how they may synergize with immunomodulatory antibodies that target PD1/PDL1 and CTLA4. We critically examine the rationale in support of these combinations in light of the current understanding of the underlying mechanisms of action of these therapies. We also discuss the available preclinical and clinical data for these combination approaches and their implications regarding mechanisms of action. Insights from these studies provide a framework for considering additional combinations of targeted therapies and immunotherapies for the treatment of cancer.",
"title": ""
},
{
"docid": "3301a0cf26af8d4d8c7b2b9d56cec292",
"text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
"title": ""
},
{
"docid": "a604527951768b088fe2e40104fa78bb",
"text": "In this study, the Multi-Layer Perceptron (MLP)with Back-Propagation learning algorithm are used to classify to effective diagnosis Parkinsons disease(PD).It’s a challenging problem for medical community.Typically characterized by tremor, PD occurs due to the loss of dopamine in the brains thalamic region that results in involuntary or oscillatory movement in the body. A feature selection algorithm along with biomedical test values to diagnose Parkinson disease.Clinical diagnosis is done mostly by doctor’s expertise and experience.But still cases are reported of wrong diagnosis and treatment.Patients are asked to take number of tests for diagnosis.In many cases,not all the tests contribute towards effective diagnosis of a disease.Our work is to classify the presence of Parkinson disease with reduced number of attributes.Original,22 attributes are involved in classify.We use Information Gain to determine the attributes which reduced the number of attributes which is need to be taken from patients.The Artificial neural networks is used to classify the diagnosis of patients.Twenty-Two attributes are reduced to sixteen attributes.The accuracy is in training data set is 82.051% and in the validation data set is 83.333%. Keywords—Data mining , classification , Parkinson disease , Artificial neural networks , Feature Selection , Information Gain",
"title": ""
},
{
"docid": "ed656b22008f1c045e9a91079e6b5be8",
"text": "Vision-based unstructured road following is a challenging task due to the nature of the scene. This paper describes a novel algorithm to improve the accuracy and robustness of vanishing point estimation with very low computational cost. The novelties of this paper are three aspects: 1) We use joint activities of four Gabor filters and confidence measure for speeding up the process of texture orientation estimation. 2) Misidentification chances and computational complexity of the algorithm are reduced by using a particle filter. It limits vanishing point search range and reduces the number of pixels to be voted. The algorithm combines the peakedness measure of vote accumulator space with the displacements of moving average of observations to regulate the distribution of vanishing point candidates. 3) Attributed to the design of a noise-insensitive observation model, the proposed system still has high detection accuracy even when less than 60 sparsely distributed vanishing point candidates are used for voting as the influence introduced by the stochastic measurement noise of vote function and the sparsity of the vanishing point candidates is reduced. The method has been implemented and tested over 20 000 video frames. Experimental results demonstrate that the algorithm achieves better performance than some state-of-the-art texture-based vanishing point detection methods in terms of detection accuracy and speed.",
"title": ""
},
{
"docid": "73c7c4ddfa01fb2b14c6a180c3357a55",
"text": "Neurodevelopmental treatment according to Dr. K. and B. Bobath can be supplemented by hippotherapy. At proper control and guidance, an improvement in posture tone, inhibition of pathological movement patterns, facilitation of normal automatical reactions and the promotion of sensorimotor perceptions is achieved. By adjustment to the swaying movements of the horse, the child feels how to retain straightening alignment, symmetry and balance. By pleasure in this therapy, the child can be motivated to satisfactory cooperation and accepts the therapy horse as its friend. The results of hippotherapy for 27 children afflicted with cerebral palsy permit a conclusion as to the value of this treatment for movement and behaviour disturbance to the drawn.",
"title": ""
},
{
"docid": "b0c91e6f8d1d6d41693800e1253b414f",
"text": "Tightly coupling GNSS pseudorange and Doppler measurements with other sensors is known to increase the accuracy and consistency of positioning information. Nowadays, high-accuracy geo-referenced lane marking maps are seen as key information sources in autonomous vehicle navigation. When an exteroceptive sensor such as a video camera or a lidar is used to detect them, lane markings provide positioning information which can be merged with GNSS data. In this paper, measurements from a forward-looking video camera are merged with raw GNSS pseudoranges and Dopplers on visible satellites. To create a localization system that provides pose estimates with high availability, dead reckoning sensors are also integrated. The data fusion problem is then formulated as sequential filtering. A reduced-order state space modeling of the observation problem is proposed to give a real-time system that is easy to implement. A Kalman filter with measured input and correlated noises is developed using a suitable error model of the GNSS pseudoranges. Our experimental results show that this tightly coupled approach performs better, in terms of accuracy and consistency, than a loosely coupled method using GNSS fixes as inputs.",
"title": ""
},
{
"docid": "52c8e39a4d6d11a36e46d655cc032a24",
"text": "Hundreds of bacterial species make up the mammalian intestinal microbiota. Following perturbations by antibiotics, diet, immune deficiency or infection, this ecosystem can shift to a state of dysbiosis. This can involve overgrowth (blooming) of otherwise under-represented or potentially harmful bacteria (for example, pathobionts). Here, we present evidence suggesting that dysbiosis fuels horizontal gene transfer between members of this ecosystem, facilitating the transfer of virulence and antibiotic resistance genes and thereby promoting pathogen evolution.",
"title": ""
},
{
"docid": "5feb2cb23eb86e71f3dec43da996d9cb",
"text": "Active power decoupling methods are developed to deal with the inherent ripple power at twice the grid frequency in single-phase systems generally by adding active switches and energy storage units. They have obtained a wide range of applications, such as photovoltaic (PV) systems, light-emitting diodes (LEDs) drivers, fuel cell (FC) power systems, and electric vehicle (EV) battery chargers, etc. This paper provides a comprehensive review of active power decoupling circuit topologies. They are categorized into two groups in terms of the structure characteristics: independent and dependent decoupling circuit topologies. The former operates independently with the original converter, and the latter, however, shares the power semiconductor devices with the original converter partially and even completely. The development laws for the active power decoupling topologies are revealed from the view of “duality principle,” “switches sharing,” and “differential connection.” In addition, the exceptions and special cases are also briefly introduced. This paper is targeted to help researchers, engineers, and designers to construct some new decoupling circuit topologies and properly select existing ones according to the specific application.",
"title": ""
},
{
"docid": "26aad391498670aee81e6b705c11e3b7",
"text": "BACKGROUND\nAn aging population means that chronic illnesses, such as diabetes, are becoming more prevalent and demands for care are rising. Members of primary care teams should organize and coordinate patient care with a view to improving quality of care and impartial adherence to evidence-based practices for all patients. The aims of the present study were: to ascertain the prevalence of diabetes in an Italian population, stratified by age, gender and citizenship; and to identify the rate of compliance with recommended guidelines for monitoring diabetes, to see whether disparities exist in the quality of diabetes patient management.\n\n\nMETHODS\nA population-based analysis was performed on a dataset obtained by processing public health administration databases. The presence of diabetes and compliance with standards of care were estimated using appropriate algorithms. A multilevel logistic regression analysis was applied to assess factors affecting compliance with standards of care.\n\n\nRESULTS\n1,948,622 Italians aged 16+ were included in the study. In this population, 105,987 subjects were identified as having diabetes on January 1st, 2009. The prevalence of diabetes was 5.43% (95% CI 5.33-5.54) overall, 5.87% (95% CI 5.82-5.92) among males, and 5.05% (95% CI 5.00-5.09) among females. HbA1c levels had been tested in 60.50% of our diabetic subjects, LDL cholesterol levels in 57.50%, and creatinine levels in 63.27%, but only 44.19% of the diabetic individuals had undergone a comprehensive assessment during one year of care. Statistical differences in diabetes care management emerged relating to gender, age, diagnostic latency period, comorbidity and citizenship.\n\n\nCONCLUSIONS\nProcess management indicators need to be used not only for the overall assessment of health care processes, but also to monitor disparities in the provision of health care.",
"title": ""
},
{
"docid": "b4d92c6573f587c60d135b8fa579aade",
"text": "Knowing the structure of criminal and terrorist networks could provide the technical insight needed to disrupt their activities.",
"title": ""
},
{
"docid": "af25bc1266003202d3448c098628aee8",
"text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.",
"title": ""
},
{
"docid": "cf3c1e626d2b94a4648a88509619dcfd",
"text": "This paper aims to compare the differences between English and Japanese vowels in order to explain why it is difficult for Japanese speakers to pronounce English vowels. Since Japanese has only five vowels, and each vowel covers more than one English vowel, Japanese learners of English often mispronounce English vowels by substituting them with Japanese vowels. In addition, Japanese speakers are not aware of how they should move their mouths, lips, and jaws when they speak English because Japanese does not require large facial expression or movement when speaking. It is very important for learners to recognize the differences because recognizing is the first step to learn pronunciation of foreign language. In order to teach Japanese speakers to pronounce English vowels correctly, teachers should also be aware of these differences and incorporate this knowledge in teaching.",
"title": ""
},
{
"docid": "10446bc5c79ccf5d9cf29b3d62278e3b",
"text": "A novel compact tri-band printed antenna for WLAN and WiMAX applications is presented. The proposed antenna consists of a modified rectangular slot, a pair of symmetrical inverted-L strips, and a Y-shaped monopole radiator with a meandering split-ring slot. Tuning the locations and the sizes of these structures, three distinct current paths can be produced at three independent frequency bands, respectively. Based on this concept, a prototype of the tri-band antenna is further fabricated and measured. The experimental and numerical results show that the antenna has impedance bandwidth (for return loss less than 10 dB) of 430 MHz (2.33-2.76 GHz), 730 MHz (3.05-3.88 GHz), and 310 MHz (5.57-5.88 GHz), which can cover both the WLAN 2.4/5.8-GHz bands and the WiMAX 2.5/3.5-GHz bands.",
"title": ""
},
{
"docid": "d984ad1af6b56e515157375c94f62fe5",
"text": "In this paper, we present a novel packet delivery mechanism called Multi-Path and Multi-SPEED Routing Protocol (MMSPEED) for probabilistic QoS guarantee in wireless sensor networks. The QoS provisioning is performed in two quality domains, namely, timeliness and reliability. Multiple QoS levels are provided in the timeliness domain by guaranteeing multiple packet delivery speed options. In the reliability domain, various reliability requirements are supported by probabilistic multipath forwarding. These mechanisms for QoS provisioning are realized in a localized way without global network information by employing localized geographic packet forwarding augmented with dynamic compensation, which compensates for local decision inaccuracies as a packet travels towards its destination. This way, MMSPEED can guarantee end-to-end requirements in a localized way, which is desirable for scalability and adaptability to large scale dynamic sensor networks. Simulation results show that MMSPEED provides QoS differentiation in both reliability and timeliness domains and, as a result, significantly improves the effective capacity of a sensor network in terms of number of flows that meet both reliability and timeliness requirements up to 50 percent (12 flows versus 18 flows).",
"title": ""
}
] |
scidocsrr
|
912ce990055ec29d0da81f515d867cc3
|
What drives mobile commerce?: An empirical evaluation of the revised technology acceptance model
|
[
{
"docid": "0209132c7623c540c125a222552f33ac",
"text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "a5ed1ebf973e3ed7ea106e55795e3249",
"text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.",
"title": ""
},
{
"docid": "c99d2914a5da4bb66ab2d3c335e3dc3b",
"text": "A traditional paper-based passport contains a MachineReadable Zone (MRZ) and a Visual Inspection Zone (VIZ). The MRZ has two lines of the holder’s personal data, some document data, and verification characters encoded using the Optical Character Recognition font B (OCRB). The encoded data includes the holder’s name, date of birth, and other identifying information for the holder or the document. The VIZ contains the holder’s photo and signature, usually on the data page. However, the MRZ and VIZ can be easily duplicated with normal document reproduction technology to produce a fake passport which can pass traditional verification. Neither of these features actively verify the holder’s identity; nor do they bind the holder’s identity to the document. A passport also contains pages for stamps of visas and of country entry and exit dates, which can be easily altered to produce fake permissions and travel records. The electronic passport, supporting authentication using secure credentials on a tamper-resistant chip, is an attempt to improve on the security of the paper-based passport at minimum cost. This paper surveys the security mechanisms built into the firstgeneration of authentication mechanisms and compares them with second-generation passports. It analyzes and describes the cryptographic protocols used in Basic Access Control (BAC) and Extended Access Control (EAC).",
"title": ""
},
{
"docid": "9747e2be285a5739bd7ee3b074a20ffc",
"text": "While software metrics are a generally desirable feature in the software management functions of project planning and project evaluation, they are of especial importance with a new technology such as the object-oriented approach. This is due to the significant need to train software engineers in generally accepted object-oriented principles. This paper presents theoretical work that builds a suite of metrics for object-oriented design. In particular, these metrics are based upon measurement theory and are informed by the insights of experienced object-oriented software developers. The proposed metrics are formally evaluated against a widelyaccepted list of software metric evaluation criteria.",
"title": ""
},
{
"docid": "bf563ecfc0dbb9a8a1b20356bde3dcad",
"text": "This paper presents a parallel architecture of an QR decomposition systolic array based on the Givens rotations algorithm on FPGA. The proposed architecture adopts a direct mapping by 21 fixed-point CORDIC-based process units that can compute the QR decomposition for an 4×4 real matrix. In order to achieve a comprehensive resource and performance evaluation, the computational error analysis, the resource utilized, and speed achieved on Virtex5 XC5VTX150T FPGA, are evaluated with the different precision of the intermediate word lengthes. The evaluation results show that 1) the proposed systolic array satisfies 99.9% correct 4×4 QR decomposition for the 2-13 accuracy requirement when the word length of the data path is lager than 25-bit, 2) occupies about 2, 810 (13%) slices, and achieves about 2.06 M/sec updates by running at the maximum frequency 111 MHz.",
"title": ""
},
{
"docid": "1d3cfb2e17360dac69705760b1ee7335",
"text": "Anaerobic and aerobic-anaerobic threshold (4 mmol/l lactate), as well as maximal capacity, were determined in seven cross country skiers of national level. All of them ran in a treadmill exercise for at least 30 min at constant heart rates as well as at constant running speed, both as previously determined for the aerobic-anaerobic threshold. During the exercise performed with a constant speed, lactate concentration initially rose to values of nearly 4 mmol/l and then remained essentially constant during the rest of the exercise. Heart rate displayed a slight but permanent increase and was on the average above 170 beats/min. A new arrangement of concepts for the anaerobic and aerobic-anaerobic threshold (as derived from energy metabolism) is suggested, that will make possible the determination of optimal work load intensities during endurance training by regulating heart rate.",
"title": ""
},
{
"docid": "dc4d11c0478872f3882946580bb10572",
"text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.",
"title": ""
},
{
"docid": "e54a0387984553346cf718a6fbe72452",
"text": "Learning distributed representations for relation instances is a central technique in downstream NLP applications. In order to address semantic modeling of relational patterns, this paper constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns on the existing dataset (Zeichner et al., 2012). In addition, we conduct a comparative study of different encoders including additive composition, RNN, LSTM, and GRU for composing distributed representations of relational patterns. We also present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. Experiments show that the new dataset does not only enable detailed analyses of the different encoders, but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task.",
"title": ""
},
{
"docid": "bdaa430fe9c0de23f9f1d7efa60d04e5",
"text": "BACKGROUND\nChronic thromboembolic pulmonary hypertension (CTPH) is associated with considerable morbidity and mortality. Its incidence after pulmonary embolism and associated risk factors are not well documented.\n\n\nMETHODS\nWe conducted a prospective, long-term, follow-up study to assess the incidence of symptomatic CTPH in consecutive patients with an acute episode of pulmonary embolism but without prior venous thromboembolism. Patients with unexplained persistent dyspnea during follow-up underwent transthoracic echocardiography and, if supportive findings were present, ventilation-perfusion lung scanning and pulmonary angiography. CTPH was considered to be present if systolic and mean pulmonary-artery pressures exceeded 40 mm Hg and 25 mm Hg, respectively; pulmonary-capillary wedge pressure was normal; and there was angiographic evidence of disease.\n\n\nRESULTS\nThe cumulative incidence of symptomatic CTPH was 1.0 percent (95 percent confidence interval, 0.0 to 2.4) at six months, 3.1 percent (95 percent confidence interval, 0.7 to 5.5) at one year, and 3.8 percent (95 percent confidence interval, 1.1 to 6.5) at two years. No cases occurred after two years among the patients with more than two years of follow-up data. The following increased the risk of CTPH: a previous pulmonary embolism (odds ratio, 19.0), younger age (odds ratio, 1.79 per decade), a larger perfusion defect (odds ratio, 2.22 per decile decrement in perfusion), and idiopathic pulmonary embolism at presentation (odds ratio, 5.70).\n\n\nCONCLUSIONS\nCTPH is a relatively common, serious complication of pulmonary embolism. Diagnostic and therapeutic strategies for the early identification and prevention of CTPH are needed.",
"title": ""
},
{
"docid": "2a13609a94050c4477d94cf0d89cbdd3",
"text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.",
"title": ""
},
{
"docid": "a93833a6ad41bdc5011a992509e77c9a",
"text": "We present the implementation of a largevocabulary continuous speech recognition (LVCSR) system on NVIDIA’s Tegra K1 hyprid GPU-CPU embedded platform. The system is trained on a standard 1000hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. The fact that the system is realtime-able and consumes less than 7.5 watts peak makes the system perfectly suitable for fast, but precise, offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.",
"title": ""
},
{
"docid": "7774017a3468e3e390753ebbe98af4d0",
"text": "We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.",
"title": ""
},
{
"docid": "3e0741fb69ee9bdd3cc455577aab4409",
"text": "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However the training time of recurrent networks is higher than feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.",
"title": ""
},
{
"docid": "6f5b3f2d2ebb46a993124242af8a50b8",
"text": "We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",
"title": ""
},
{
"docid": "5e04372f08336da5b8ab4d41d69d3533",
"text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.",
"title": ""
},
{
"docid": "fcbf97bfbcf63ee76f588a05f82de11e",
"text": "The Deliberation without Attention (DWA) effect refers to apparent improvements in decision-making following a period of distraction. It has been presented as evidence for beneficial unconscious cognitive processes. We identify two major concerns with this claim: first, as these demonstrations typically involve subjective preferences, the effects of distraction cannot be objectively assessed as beneficial; second, there is no direct evidence that the DWA manipulation promotes unconscious decision processes. We describe two tasks based on the DWA paradigm in which we found no evidence that the distraction manipulation led to decision processes that are subjectively unconscious, nor that it reduced the influence of presentation order upon performance. Crucially, we found that a lack of awareness of decision process was associated with poorer performance, both in terms of subjective preference measures used in traditional DWA paradigm and in an equivalent task where performance can be objectively assessed. Therefore, we argue that reliance on conscious memory itself can explain the data. Thus the DWA paradigm is not an adequate method of assessing beneficial unconscious thought.",
"title": ""
},
{
"docid": "4c03c0fc33f8941a7769644b5dfb62ef",
"text": "A multiband MIMO antenna for a 4G mobile terminal is proposed. The antenna structure consists of a multiband main antenna element, a printed inverted-L subantenna element operating in the higher 2.5 GHz bands, and a wideband loop sub-antenna element working in lower 0.9 GHz band. In order to improve the isolation and ECC characteristics of the proposed MIMO antenna, each element is located at a different corner of the ground plane. In addition, the inductive coils are employed to reduce the antenna volume and realize the wideband property of the loop sub-antenna element. Finally, the proposed antenna covers LTE band 7/8, PCS, WiMAX, and WLAN service, simultaneously. The MIMO antenna has ECC lower than 0.15 and isolation higher than 12 dB in both lower and higher frequency bands.",
"title": ""
},
{
"docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a",
"text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.",
"title": ""
},
{
"docid": "fbee148ef2de028cc53a371c27b4d2be",
"text": "Desalination is a water-treatment process that separates salts from saline water to produce potable water or water that is low in total dissolved solids (TDS). Globally, the total installed capacity of desalination plants was 61 million m3 per day in 2008 [1]. Seawater desalination accounts for 67% of production, followed by brackish water at 19%, river water at 8%, and wastewater at 6%. Figure 1 show the worldwide feed-water percentage used in desalination. The most prolific users of desalinated water are in the Arab region, namely, Saudi Arabia, Kuwait, United Arab Emirates, Qatar, Oman, and Bahrain [2].",
"title": ""
},
{
"docid": "af254a16b14a3880c9b8fe5b13f1a695",
"text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.",
"title": ""
}
] |
scidocsrr
|
7459e0c8a32530a2615a218484e8a04d
|
Meta-analysis of the heritability of human traits based on fifty years of twin studies
|
[
{
"docid": "b51fcfa32dbcdcbcc49f1635b44601ed",
"text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.",
"title": ""
},
{
"docid": "f4e73a0c766ce1ead78b2b770e641f61",
"text": "Epistasis, or interactions between genes, has long been recognized as fundamentally important to understanding the structure and function of genetic pathways and the evolutionary dynamics of complex genetic systems. With the advent of high-throughput functional genomics and the emergence of systems approaches to biology, as well as a new-found ability to pursue the genetic basis of evolution down to specific molecular changes, there is a renewed appreciation both for the importance of studying gene interactions and for addressing these questions in a unified, quantitative manner.",
"title": ""
}
] |
[
{
"docid": "07cbbb184a627456922a1e66ae54d3d2",
"text": "A maximum likelihood (ML) acoustic source location estimation method is presented for the application in a wireless ad hoc sensor network. This method uses acoustic signal energy measurements taken at individual sensors of an ad hoc wireless sensor network to estimate the locations of multiple acoustic sources. Compared to the existing acoustic energy based source localization methods, this proposed ML method delivers more accurate results and offers the enhanced capability of multiple source localization. A multiresolution search algorithm and an expectation-maximization (EM) like iterative algorithm are proposed to expedite the computation of source locations. The Crame/spl acute/r-Rao Bound (CRB) of the ML source location estimate has been derived. The CRB is used to analyze the impacts of sensor placement to the accuracy of location estimates for single target scenario. Extensive simulations have been conducted. It is observed that the proposed ML method consistently outperforms existing acoustic energy based source localization methods. An example applying this method to track military vehicles using real world experiment data also demonstrates the performance advantage of this proposed method over a previously proposed acoustic energy source localization method.",
"title": ""
},
{
"docid": "1a154992369fc30c36613fc811df53ac",
"text": "Speech recognition is a subjective phenomenon. Despite being a huge research in this field, this process still faces a lot of problem. Different techniques are used for different purposes. This paper gives an overview of speech recognition process. Various progresses have been done in this field. In this work of project, it is shown that how the speech signals are recognized using back propagation algorithm in neural network. Voices of different persons of various ages",
"title": ""
},
{
"docid": "fce49da5560a89cef5738cbcb41ad2bd",
"text": "This paper conceptualizes IT service management (ITSM) capability, a key competence of today’s IT provider organizations, and presents a survey instrument to facilitate the measurement of an ITSM capability for research and practice. Based on the review of four existing ITSM maturity models (CMMISVC, COBIT 4.1, SPICE, ITIL v3), we first develop a multi-attributive scale to assess maturity on an ITSM process level. We then use this scale in a survey with 205 ITSM key informants who assessed IT provider organizations along a set of 26 established ITSM processes. Our exploratory factor analysis and measurement model assessment results support the validity of an operationalization of ITSM capability as a second-order construct formed by ITSM processes that span three dimensions: service planning, service transition, and service operation. The practical utility of our survey instrument and avenues for future research on ITSM capability are outlined.",
"title": ""
},
{
"docid": "c4282486dad6f0fef06964bd3fa45272",
"text": "In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models. Specically, the toolkit provides a unied data preparation module for dierent text matching problems, a exible layer-based model construction process, and a variety of training objectives and evaluation metrics. In addition, the toolkit has implemented two schools of representative deep text matching models, namely representation-focused models and interactionfocused models. Finally, users can easily modify existing models, create and share their own models for text matching in MatchZoo.",
"title": ""
},
{
"docid": "07d0009e53d2ccdfe7888b12ac173cd0",
"text": "This paper presents a training method that encodes each word into a different vector in semantic space and its relation to low entropy coding. Elman network is employed in the method to process word sequences from literary works. The trained codes possess reduced entropy and are used in ranking, indexing, and categorizing literary works. A modification of the method to train the multi-vector for each polysemous word is also presented where each vector represents a different meaning of its word. These multiple vectors can accommodate several different meanings of their word. This method is applied to the stylish analyses of two Chinese novels, Dream of the Red Chamber and Romance of the Three Kingdoms. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d509cb384ecddafa0c4f866882af2c77",
"text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.",
"title": ""
},
{
"docid": "9039058c93aeaa99dae15617e5032b33",
"text": "Data sparsity is one of the most challenging problems for recommender systems. One promising solution to this problem is cross-domain recommendation, i.e., leveraging feedbacks or ratings from multiple domains to improve recommendation performance in a collective manner. In this paper, we propose an Embedding and Mapping framework for Cross-Domain Recommendation, called EMCDR. The proposed EMCDR framework distinguishes itself from existing crossdomain recommendation models in two aspects. First, a multi-layer perceptron is used to capture the nonlinear mapping function across domains, which offers high flexibility for learning domain-specific features of entities in each domain. Second, only the entities with sufficient data are used to learn the mapping function, guaranteeing its robustness to noise caused by data sparsity in single domain. Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms stateof-the-art cross-domain recommendation methods.",
"title": ""
},
{
"docid": "25c25864ac5584b99aacbda88bda6203",
"text": "Our goal is to be able to build a generative model from a deep neural network architecture to try to create music that has both harmony and melody and is passable as music composed by humans. Previous work in music generation has mainly been focused on creating a single melody. More recent work on polyphonic music modeling, centered around time series probability density estimation, has met some partial success. In particular, there has been a lot of work based off of Recurrent Neural Networks combined with Restricted Boltzmann Machines (RNNRBM) and other similar recurrent energy based models. Our approach, however, is to perform end-to-end learning and generation with deep neural nets alone.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
{
"docid": "88976f137ea43b1be8d133ddc4124af2",
"text": "Real-time stereo vision is attractive in many areas such as outdoor mapping and navigation. As a popular accelerator in the image processing field, GPU is widely used for the studies of the stereo vision algorithms. Recently, many stereo vision systems on GPU have achieved low error rate, as a result of the development of deep learning. However, their processing speed is normally far from the real-time requirement. In this paper, we propose a real-time stereo vision system on GPU for the high-resolution images. This system also maintains a low error rate compared with other fast systems. In our approach, the image is resized to reduce the computational complexity and to realize the real-time processing. The low error rate is kept by using the cost aggregation with multiple blocks, secondary matching and sub-pixel estimation. Its processing speed is 41 fps for $2888\\times 1920$ pixels images when the maximum disparity is 760.",
"title": ""
},
{
"docid": "01ee0af8491087a7c50002c7d6b7411e",
"text": "The way that information propagates in neural networks is of great importance. In this paper, we propose Path Aggregation Network (PANet) aiming at boosting information flow in proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and topmost feature. We present adaptive feature pooling, which links feature grid and all feature levels to make useful information in each level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Yet they are useful and make our PANet reach the 1st place in the COCO 2017 Challenge Instance Segmentation task and the 2nd place in Object Detection task without large-batch training. PANet is also state-of-the-art on MVD and Cityscapes.",
"title": ""
},
{
"docid": "e59f53449783b3b7aceef8ae3b43dae1",
"text": "W E use the definitions of (11). However, in deference to some recent attempts to unify the terminology of graph theory we replace the term 'circuit' by 'polygon', and 'degree' by 'valency'. A graph G is 3-connected (nodally 3-connected) if it is simple and non-separable and satisfies the following condition; if G is the union of two proper subgraphs H and K such that HnK consists solely of two vertices u and v, then one of H and K is a link-graph (arc-graph) with ends u and v. It should be noted that the union of two proper subgraphs H and K of G can be the whole of G only if each of H and K includes at least one edge or vertex not belonging to the other. In this paper we are concerned mainly with nodally 3-connected graphs, but a specialization to 3-connected graphs is made in § 12. In § 3 we discuss conditions for a nodally 3-connected graph to be planar, and in § 5 we discuss conditions for the existence of Kuratowski subgraphs of a given graph. In §§ 6-9 we show how to obtain a convex representation of a nodally 3-connected graph, without Kuratowski subgraphs, by solving a set of linear equations. Some extensions of these results to general graphs, with a proof of Kuratowski's theorem, are given in §§ 10-11. In § 12 we discuss the representation in the plane of a pair of dual graphs, and in § 13 we draw attention to some unsolved problems.",
"title": ""
},
{
"docid": "ea5a455bca9ff0dbb1996bd97d89dfe5",
"text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.",
"title": ""
},
{
"docid": "c28ee3a41d05654eedfd379baf2d5f24",
"text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Logistic Regression (LR) and Fisher’s Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.",
"title": ""
},
{
"docid": "c5cfe386f6561eab1003d5572443612e",
"text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.",
"title": ""
},
{
"docid": "51e2f490072820230d71f648d70babcb",
"text": "Classification and regression trees are becoming increasingly popular for partitioning data and identifying local structure in small and large datasets. Classification trees include those models in which the dependent variable (the predicted variable) is categorical. Regression trees include those in which it is continuous. This paper discusses pitfalls in the use of these methods and highlights where they are especially suitable. Paper presented at the 1992 Sun Valley, ID, Sawtooth/SYSTAT Joint Software Conference.",
"title": ""
},
{
"docid": "269c1cb7fe42fd6403733fdbd9f109e3",
"text": "Myofibroblasts are the key players in extracellular matrix remodeling, a core phenomenon in numerous devastating fibrotic diseases. Not only in organ fibrosis, but also the pivotal role of myofibroblasts in tumor progression, invasion and metastasis has recently been highlighted. Myofibroblast targeting has gained tremendous attention in order to inhibit the progression of incurable fibrotic diseases, or to limit the myofibroblast-induced tumor progression and metastasis. In this review, we outline the origin of myofibroblasts, their general characteristics and functions during fibrosis progression in three major organs: liver, kidneys and lungs as well as in cancer. We will then discuss the state-of-the art drug targeting technologies to myofibroblasts in context of the above-mentioned organs and tumor microenvironment. The overall objective of this review is therefore to advance our understanding in drug targeting to myofibroblasts, and concurrently identify opportunities and challenges for designing new strategies to develop novel diagnostics and therapeutics against fibrosis and cancer.",
"title": ""
},
{
"docid": "af7f83599c163d0f519f1e2636ae8d44",
"text": "There is a set of characterological attributes thought to be associated with developing success at critical thinking (CT). This paper explores the disposition toward CT theoretically, and then as it appears to be manifest in college students. Factor analytic research grounded in a consensus-based conceptual analysis of CT described seven aspects of the overall disposition toward CT: truth-seeking, open-mindedness, analyticity, systematicity, CTconfidence, inquisitiveness, and cognitive maturity. The California Critical Thinking Disposition Inventory (CCTDI), developed in 1992, was used to sample college students at two comprehensive universities. Entering college freshman students showed strengths in openmindedness and inquisitiveness, weaknesses in systematicity and opposition to truth-seeking. Additional research indicates the disposition toward CT is highly correlated with the psychological constructs of absorption and openness to experience, and strongly predictive of ego-resiliency. A preliminary study explores the interesting and potentially complex interrelationship between the disposition toward CT and CT abilities. In addition to the significance of this work for psychological studies of human development, empirical research on the disposition toward CT promises important implications for all levels of education. 1 This essay appeared as Facione, PA, Sánchez, (Giancarlo) CA, Facione, NC & Gainen, J., (1995). The disposition toward critical thinking. Journal of General Education. Volume 44, Number(1). 1-25.",
"title": ""
},
{
"docid": "023285cbd5d356266831fc0e8c176d4f",
"text": "The two authorsLakoff, a linguist and Nunez, a psychologistpurport to introduce a new field of study, i.e. \"mathematical idea analysis\", with this book. By \"mathematical idea analysis\", they mean to give a scientifically plausible account of mathematical concepts using the apparatus of cognitive science. This approach is meant to be a contribution to academics and possibly education as it helps to illuminate how we cognitise mathematical concepts, which are supposedly undecipherable and abstruse to laymen. The analysis of mathematical ideas, the authors claim, cannot be done within mathematics, for even metamathematicsrecursive theory, model theory, set theory, higherorder logic still requires mathematical idea analysis in itself! Formalism, by its very nature, voids symbols of their meanings and thus cognition is required to imbue meaning. Thus, there is a need for this new field, in which the authors, if successful, would become pioneers.",
"title": ""
},
{
"docid": "0824992bb506ac7c8a631664bf608086",
"text": "There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the a/spl grave/ trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.",
"title": ""
}
] |
scidocsrr
|
f62cdeba036f02492a67177753b54404
|
A Comparative Study of Ontology building Tools in Semantic Web Applications
|
[
{
"docid": "db483f6aab0361ce5a3ad1a89508541b",
"text": "In this paper, we describe Swoop, a hypermedia inspired Ontology Browser and Editor based on OWL, the recently standardized Web-oriented ontology language. After discussing the design rationale and architecture of Swoop, we focus mainly on its features, using illustrative examples to highlight its use. We demonstrate that with its web-metaphor, adherence to OWL recommendations and key unique features such as Collaborative Annotation using Annotea, Swoop acts as a useful and efficient web ontology development tool. We conclude with a list of future plans for Swoop, that should further increase its overall appeal and accessibility.",
"title": ""
}
] |
[
{
"docid": "0f1f3718892a25094918ce9685c5ab78",
"text": "The present study was conducted to determine effects of different forms of yeast (Saccharomyces cerevisiae, strain Y200007) on the growth performance, intestinal development, and systemic immunity in early-weaned piglets. A total of 96 piglets (14-d old, initial average body weight of 4.5 kg) were assigned to 4 dietary treatments: (1) basal diet without yeast (Control); (2) basal diet supplemented with 3.00 g/kg live yeast (LY); (3) basal diet supplemented with 2.66 g/kg heat-killed whole yeast (HKY); and (4) basal diet supplemented with 3.00 g/kg superfine yeast powders (SFY). Diets and water were provided ad libitum to the piglets during 3-week experiment. Growth performance of piglets was measured weekly. Samples of blood and small intestine were collected at days 7 and 21 of experiment. Dietary supplementation with LY and SFY improved G:F of piglets at days 1-21 of the experiment (P < 0.05) compared to Control group. Serum concentrations of growth hormone (GH), triiodothyronine (T3), tetraiodothyronine (T4), and insulin growth factor 1 (IGF-1) in piglets at day 21 of the experiment were higher when fed diets supplemented with LY and SFY than those in Control group (P < 0.05). Compared to Control group, contents of serum urea nitrogen of piglets were reduced by the 3 yeast-supplemented diets (P < 0.05). Diets supplemented with LY increased villus height and villus-to-crypt ratio in duodenum and jejunum of piglets (P < 0.05) compared to other two groups at day 7 of the experiment. Feeding diets supplemented with LY and SFY increased (P < 0.05) serum concentrations of IgA, IL-2, and IL-6 levels in piglets compared to Control. The CD4+/CD8+ ratio and proliferation of T-lymphocytes in piglets fed diets supplemented with LY were increased compared to that of Control group at day 7 of the experiment (P < 0.05). In conclusion, dietary supplementation with both LY and SFY enhanced feed conversion, small intestinal development, and systemic immunity in early-weaned piglets, with better improvement in feed conversion by dietary supplementation with LY, while dietary supplementation with SFY was more effective in increasing systemic immune functions in early-weaned piglets.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "825888e4befcbf6b492143a13928a34e",
"text": "Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one of such social medias used by many users about some topics in the form of tweets. These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of the Twitter datasets, metaheuristic-based clustering methods outperforms the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) which is based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of Twitter dataset. The efficacy of proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, gauss-based cuckoo search, and two n-grams methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for the future research to analyze the data generated through social networks/medias. This method has also very generalized practical implications for designing a system that can provide conclusive reviews on any social issues. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1dc4a8f02dfe105220db5daae06c2229",
"text": "Photosynthesis begins with light harvesting, where specialized pigment-protein complexes transform sunlight into electronic excitations delivered to reaction centres to initiate charge separation. There is evidence that quantum coherence between electronic excited states plays a role in energy transfer. In this review, we discuss how quantum coherence manifests in photosynthetic light harvesting and its implications. We begin by examining the concept of an exciton, an excited electronic state delocalized over several spatially separated molecules, which is the most widely available signature of quantum coherence in light harvesting. We then discuss recent results concerning the possibility that quantum coherence between electronically excited states of donors and acceptors may give rise to a quantum coherent evolution of excitations, modifying the traditional incoherent picture of energy transfer. Key to this (partially) coherent energy transfer appears to be the structure of the environment, in particular the participation of non-equilibrium vibrational modes. We discuss the open questions and controversies regarding quantum coherent energy transfer and how these can be addressed using new experimental techniques.",
"title": ""
},
{
"docid": "6fb48ddc2f14cdb9371aad67e9c8abe0",
"text": "Being able to predict the course of arbitrary chemical react ions is essential to the theory and applications of organic chemistry. Previous app roaches are not highthroughput, are not generalizable or scalable, or lack suffi cient data to be effective. We describe single mechanistic reactions as concerted elec tron movements from an electron orbital source to an electron orbital sink. We us e an existing rule-based expert system to derive a dataset consisting of 2,989 productive mechanistic steps and6.14 million non-productive mechanistic steps. We then pose ide nt fying productive mechanistic steps as a ranking problem: rank potent ial orbital interactions such that the top ranked interactions yield the major produc ts. The machine learning implementation follows a two-stage approach, in which w e first train atom level reactivity filters to prune94.0% of non-productive reactions with less than a 0.1% false negative rate. Then, we train an ensemble of ranking mo dels n pairs of interacting orbitals to learn a relative productivity func tion over single mechanistic reactions in a given system. Without the use of explicit t ransformation patterns, the ensemble perfectly ranks the productive mechanisms at t he top89.1% of the time, rising to99.9% of the time when top ranked lists with at most four nonproductive reactions are considered. The final system allow s multi-step reaction prediction. Furthermore, it is generalizable, making reas on ble predictions over reactants and conditions which the rule-based expert syste m does not handle.",
"title": ""
},
{
"docid": "bf55a5d7a6d75ae9405fc19ffbeb5b91",
"text": "A modified isoflux pattern antenna suitable for S-band satellite communication is introduced and investigated with respect to its beam pattern and axial ratios. For uniform coverage over the Earth's surface, an isoflux pattern antenna is required because of its uniform power density over a wide coverage area. A final optimization for the parameter values is carried out from the parametric studies of the antenna, and a simple analysis of the combination of horizontal dipoles and vertically attached top-hat elements is also carried out. The electrical performances of the proposed antenna are verified in terms of the reflection coefficients, radiation pattern, and axial ratios according to a wide beamwidth rather than at a fixed observation angle such as a 0° main beam direction.",
"title": ""
},
{
"docid": "51c4dd282e85db5741b65ae4386f6c48",
"text": "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These lowlevel visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.",
"title": ""
},
{
"docid": "1d483a47ff5c735fd0ee78dfdb9bd4f0",
"text": "This paper is concerned with graphical criteria that can be used to solve the problem of identifying casual effects from nonexperimental data in a causal Bayesian network structure, i.e., a directed acyclic graph that represents causal relationships. We first review Pearl’s work on this topic [Pearl, 1995], in which several useful graphical criteria are presented. Then we present a complete algorithm [Huang and Valtorta, 2006b] for the identifiability problem. By exploiting the completeness of this algorithm, we prove that the three basicdo-calculus rulesthat Pearl presents are complete, in the sense that, if a causal effect is identifiable, there exists a sequence of applications of the rules of the do-calculus that transforms the causal effect formula into a formula that only includes observational quantities.",
"title": ""
},
{
"docid": "d79125db077fdde79653feaf987eb6a0",
"text": "This paper focuses on the overall task of recommending to the chemist candidate molecules (reactants) necessary to synthesize a given target molecule (product), which is a novel application as well as an important step for the chemist to find a synthesis route to generate the product. We formulate this task as a link-prediction problem over a so-called Network of Organic Chemistry (NOC) that we have constructed from 8 million chemical reactions described in the US patent literature between 1976 and 2013. We leverage state-of-the-art factorization algorithms for recommender systems to solve this task. Our empirical evaluation demonstrates that Factorization Machines, trained with chemistry-specific knowledge, outperforms current methods based on similarity of chemical structures.",
"title": ""
},
{
"docid": "9ecf20a9df11e008ddd01c9dea38b942",
"text": "A n interest rate swap is a contractual agreement between two parties to exchange a series of interest rate payments without exchanging the underlying debt. The interest rate swap represents one example of a general category of financial instruments known as derivative instruments. In the most general terms, a derivative instrument is an agreement whose value derives from some underlying market return, market price, or price index. The rapid growth of the market for swaps and other derivatives in recent years has spurred considerable controversy over the economic rationale for these instruments. Many observers have expressed alarm over the growth and size of the market, arguing that interest rate swaps and other derivative instruments threaten the stability of financial markets. Recently, such fears have led both legislators and bank regulators to consider measures to curb the growth of the market. Several legislators have begun to promote initiatives to create an entirely new regulatory agency to supervise derivatives trading activity. Underlying these initiatives is the premise that derivative instruments increase aggregate risk in the economy, either by encouraging speculation or by burdening firms with risks that management does not understand fully and is incapable of controlling.1 To be certain, much of this criticism is aimed at many of the more exotic derivative instruments that have begun to appear recently. Nevertheless, it is difficult, if not impossible, to appreciate the economic role of these more exotic instruments without an understanding of the role of the interest rate swap, the most basic of the new generation of financial derivatives.",
"title": ""
},
{
"docid": "86749ba424002d4b007cb3942dca225a",
"text": "BACKGROUND\nWet needling uses hollow-bore needles to deliver corticosteroids, anesthetics, sclerosants, botulinum toxins, or other agents. In contrast, dry needling requires the insertion of thin monofilament needles, as used in the practice of acupuncture, without the use of injectate into muscles, ligaments, tendons, subcutaneous fascia, and scar tissue. Dry needles may also be inserted in the vicinity of peripheral nerves and/or neurovascular bundles in order to manage a variety of neuromusculoskeletal pain syndromes. Nevertheless, some position statements by several US State Boards of Physical Therapy have narrowly defined dry needling as an 'intramuscular' procedure involving the isolated treatment of 'myofascial trigger points' (MTrPs).\n\n\nOBJECTIVES\nTo operationalize an appropriate definition for dry needling based on the existing literature and to further investigate the optimal frequency, duration, and intensity of dry needling for both spinal and extremity neuromusculoskeletal conditions.\n\n\nMAJOR FINDINGS\nAccording to recent findings in the literature, the needle tip touches, taps, or pricks tiny nerve endings or neural tissue (i.e. 'sensitive loci' or 'nociceptors') when it is inserted into a MTrP. To date, there is a paucity of high-quality evidence to underpin the use of direct dry needling into MTrPs for the purpose of short and long-term pain and disability reduction in patients with musculoskeletal pain syndromes. Furthermore, there is a lack of robust evidence validating the clinical diagnostic criteria for trigger point identification or diagnosis. High-quality studies have also demonstrated that manual examination for the identification and localization of a trigger point is neither valid nor reliable between-examiners.\n\n\nCONCLUSIONS\nSeveral studies have demonstrated immediate or short-term improvements in pain and/or disability by targeting trigger points (TrPs) using in-and-out techniques such as 'pistoning' or 'sparrow pecking'; however, to date, no high-quality, long-term trials supporting in-and-out needling techniques at exclusively muscular TrPs exist, and the practice should therefore be questioned. The insertion of dry needles into asymptomatic body areas proximal and/or distal to the primary source of pain is supported by the myofascial pain syndrome literature. Physical therapists should not ignore the findings of the Western or biomedical 'acupuncture' literature that have used the very same 'dry needles' to treat patients with a variety of neuromusculoskeletal conditions in numerous, large scale randomized controlled trials. Although the optimal frequency, duration, and intensity of dry needling has yet to be determined for many neuromusculoskeletal conditions, the vast majority of dry needling randomized controlled trials have manually stimulated the needles and left them in situ for between 10 and 30 minute durations. Position statements and clinical practice guidelines for dry needling should be based on the best available literature, not a single paradigm or school of thought; therefore, physical therapy associations and state boards of physical therapy should consider broadening the definition of dry needling to encompass the stimulation of neural, muscular, and connective tissues, not just 'TrPs'.",
"title": ""
},
{
"docid": "667a2ea2b8ed7d2c709f04d8cd6617c6",
"text": "Knowledge centric activities of developing new products and services are becoming the primary source of sustainable competitive advantage in an era characterized by short product life cycles, dynamic markets and complex processes. We Ž . view new product development NPD as a knowledge-intensive activity. Based on a case study in the consumer electronics Ž . industry, we identify problems associated with knowledge management KM in the context of NPD by cross-functional collaborative teams. We map these problems to broad Information Technology enabled solutions and subsequently translate these into specific system characteristics and requirements. A prototype system that meets these requirements developed to capture and manage tacit and explicit process knowledge is further discussed. The functionalities of the system include functions for representing context with informal components, easy access to process knowledge, assumption surfacing, review of past knowledge, and management of dependencies. We demonstrate the validity our proposed solutions using scenarios drawn from our case study. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "dd5883895261ad581858381bec1b92eb",
"text": "PURPOSE\nTo establish the validity and reliability of a new vertical jump force test (VJFT) for the assessment of bilateral strength asymmetry in a total of 451 athletes.\n\n\nMETHODS\nThe VJFT consists of countermovement jumps with both legs simultaneously: one on a single force platform, the other on a leveled wooden platform. Jumps with the right or the left leg on the force platform were alternated. Bilateral strength asymmetry was calculated as [(stronger leg - weaker leg)/stronger leg] x 100. A positive sign indicates a stronger right leg; a negative sign indicates a stronger left leg. Studies 1 (N = 59) and 2 (N = 41) examined the correlation between the VJFT and other tests of lower-limb bilateral strength asymmetry in male athletes. In study 3, VJFT reliability was assessed in 60 male athletes. In study 4, the effect of rehabilitation on bilateral strength asymmetry was examined in seven male and female athletes 8-12 wk after unilateral knee surgery. In study 5, normative data were determined in 313 male soccer players.\n\n\nRESULTS\nSignificant correlations were found between VJFT and both the isokinetic leg extension test (r = 0.48; 95% confidence interval, 0.26-0.66) and the isometric leg press test (r = 0.83; 0.70-0.91). VJFT test-retest intraclass correlation coefficient was 0.91 (0.85-0.94), and typical error was 2.4%. The change in mean [-0.40% (-1.25 to 0.46%)] was not substantial. Rehabilitation decreased bilateral strength asymmetry (mean +/- SD) of the athletes recovering from unilateral knee surgery from 23 +/- 3 to 10 +/- 4% (P < 0.01). The range of normal bilateral strength asymmetry (2.5th to 97.5th percentiles) was -15 to 15%.\n\n\nCONCLUSIONS\nThe assessment of bilateral strength asymmetry with the VJFT is valid and reliable, and it may be useful in sports medicine.",
"title": ""
},
{
"docid": "be5e1336187b80bc418b2eb83601fbd4",
"text": "Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in robotics, including driver assistance systems, road scene understanding and surveillance systems. The two main practical requirements for fielding such systems are very high accuracy and real-time speed: we need pedestrian detectors that are accurate enough to be relied on and are fast enough to run on systems with limited compute power. This paper addresses both of these requirements by combining very accurate deep-learning-based classifiers within very efficient cascade classifier frameworks. Deep neural networks (DNN) have been shown to excel at classification tasks [5], and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second (FPS). The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is the first work we are aware of that achieves high accuracy while running in real-time. To achieve this, we combine a fast cascade [2] with a cascade of classifiers, which we propose to be DNNs. Our approach is unique, as it is the only one to produce a pedestrian detector at real-time speeds (15 FPS) that is also very accurate. Figure 1 visualizes existing methods as plotted on the accuracy computational time axis, measured on the challenging Caltech pedestrian detection benchmark [4]. As can be seen in this figure, our approach is the only one to reside in the high accuracy, high speed region of space, which makes it particularly appealing for practical applications. Fast Deep Network Cascade. Our main architecture is a cascade structure in which we take advantage of the fast features for elimination, VeryFast [2] as an initial stage and combine it with small and large deep networks [1, 5] for high accuracy. The VeryFast algorithm is a cascade itself, but of boosting classifiers. It reduces recall with each stage, producing a high average miss rate in the end. Since the goal is eliminate many non-pedestrian patches and at the same time keep the recall high, we used only 10% of the stages in that cascade. Namely, we use a cascade of only 200 stages, instead of the 2000 in the original work. The first stage of our deep cascade processes all image patches that have high confidence values and pass through the VeryFast classifier. We here utilize the idea of a tiny convolutional network proposed by our prior work [1]. The tiny deep network has three layers only and features a 5x5 convolution, a 1x1 convolution and a very shallow fully-connected layer of 512 units. It reduces the massive computational time that is needed to evaluate a full DNN at all candidate locations filtered by the previous stage. The speedup produced by the tiny network, is a crucial component in achieving real-time performance in our fast cascade method. The baseline deep neural network is based on the original deep network of Krizhevsky et al [5]. As mentioned, this network in general is extremely slow to be applied alone. To achieve real-time speeds, we first apply it to only the remaining filtered patches from the previous two stages. 
Another key difference is that we reduced the depths of some of the convolutional layers and the sizes of the receptive fields, which is specifically done to gain speed advantage. Runtime. Our deep cascade works at 67ms on a standard NVIDIA K20 Tesla GPU per 640x480 image, which is a runtime of 15 FPS. The time breakdown is as follows. The soft-cascade takes about 7 milliseconds (ms). About 1400 patches are passed through per image from the fast cascade. The tiny DNN runs at 0.67 ms per batch of 128, so it can process the patches in 7.3 ms. The final stage of the cascade (which is the baseline classifier) takes about 53ms. This is an overall runtime of 67ms. Experimental evaluation. We evaluate the performance of the Fast Deep Network Cascade using the training and test protocols established in the Caltech pedestrian benchmark [4]. We tested several scenarios by training on the Caltech data only, denoted as DeepCascade, on an indeFigure 1: Performance of pedestrian detection methods on the accuracy vs speed axis. Our DeepCascade method achieves both smaller missrates and real-time speeds. Methods for which the runtime is more than 5 seconds per image, or is unknown, are plotted on the left hand side. The SpatialPooling+/Katamari methods use additional motion information.",
"title": ""
},
{
"docid": "99a5e184a10ebf1bb07fb51799edf085",
"text": "Anthropogenic debris contaminates marine habitats globally, leading to several perceived ecological impacts. Here, we critically and systematically review the literature regarding impacts of debris from several scientific fields to understand the weight of evidence regarding the ecological impacts of marine debris. We quantified perceived and demonstrated impacts across several levels of biological organization that make up the ecosystem and found 366 perceived threats of debris across all levels. Two hundred and ninety-six of these perceived threats were tested, 83% of which were demonstrated. The majority (82%) of demonstrated impacts were due to plastic, relative to other materials (e.g., metals, glass) and largely (89%) at suborganismal levels (e.g., molecular, cellular, tissue). The remaining impacts, demonstrated at higher levels of organization (i.e., death to individual organisms, changes in assemblages), were largely due to plastic marine debris (> 1 mm; e.g., rope, straws, and fragments). Thus, we show evidence of ecological impacts from marine debris, but conclude that the quantity and quality of research requires improvement to allow the risk of ecological impacts of marine debris to be determined with precision. Still, our systematic review suggests that sufficient evidence exists for decision makers to begin to mitigate problematic plastic debris now, to avoid risk of irreversible harm.",
"title": ""
},
{
"docid": "8a708ec1187ecb2fe9fa929b46208b34",
"text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.",
"title": ""
},
{
"docid": "bf14f996f9013351aca1e9935157c0e3",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
},
{
"docid": "2050510516064ec497ee853ac119b402",
"text": "Individuals with high math anxiety demonstrated smaller working memory spans, especially when assessed with a computation-based span task. This reduced working memory capacity led to a pronounced increase in reaction time and errors when mental addition was performed concurrently with a memory load task. The effects of the reduction also generalized to a working memory-intensive transformation task. Overall, the results demonstrated that an individual difference variable, math anxiety, affects on-line performance in math-related tasks and that this effect is a transitory disruption of working memory. The authors consider a possible mechanism underlying this effect--disruption of central executive processes--and suggest that individual difference variables like math anxiety deserve greater empirical attention, especially on assessments of working memory capacity and functioning.",
"title": ""
},
{
"docid": "84d8d6ebd899950712003a5567899f75",
"text": "Despite the benefits of information technology for corporations in staff recruitment (reduced time and costs per hire) the increased use also led to glut of applications especially in major enterprises. Therefore the companies forced to find the best candidate in times of a \"War for Talent\" need help to find this needle in a haystack. This help could be provided by recommender systems predominately used in e-commerce to recommend products or services to customers purchasing specific products. Recommender systems could assist the recruiter to find the adequate candidate within the applicant's database. In order to support this search and selection process we conduct a design science approach to integrate recommender systems in a holistic e-recruiting architecture and therewith provide a complete and new solution for IT support in staff recruitment.",
"title": ""
},
{
"docid": "d703e4a88b4af2b9d1fc86b448977b9f",
"text": "BACKGROUND\nThe diagnostic value of procalcitonin (PCT) for patients with renal impairment is unclear.\n\n\nMETHODS\nWe searched multiple databases for studies published through December 2011 that evaluated the diagnostic performance of PCT among patients with renal impairment and suspected systemic bacterial infection. We summarized test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic (HSROC) curves, and bivariate random effects models.\n\n\nRESULTS\nOur search identified 201 citations, of which seven diagnostic studies evaluated 803 patients and 255 bacterial infection episodes. HSROC-bivariate pooled sensitivity estimates were 73% [95% confidence interval (95% CI) 54-86%] for PCT tests and 78% (95% CI 52-92%) for CRP tests. Pooled specificity estimates were higher for both PCT and CRP tests [PCT, 88% (95% CI 79-93%); CRP, 84% (95% CI, 52-96%)]. The positive likelihood ratio for PCT [likelihood (LR)+ 6.02, 95% CI 3.16-11.47] was sufficiently high to be qualified as a rule-in diagnostic tool, while the negative likelihood ratio was not low enough to be used as a rule-out diagnostic tool (LR- 0.31, 95% CI 0.17-0.57). There was no consistent evidence that PCT was more accurate than CRP test for the diagnosis of systemic infection among patients with renal impairment.\n\n\nCONCLUSIONS\nBoth PCT and CRP tests have poor sensitivity but acceptable specificity in diagnosing bacterial infection among patients with renal impairment. Given the poor negative likelihood ratio, its role as a rule-out test is questionable.",
"title": ""
}
] |
scidocsrr
|
79578da2b4d0b0d33e71dcd684eed31e
|
Architecture design and implementation of image based autonomous car: THUNDER-1
|
[
{
"docid": "dee922c700479ea808e59fd323193e48",
"text": "In this article we present a novel mapping system that robustly generates highly accurate 3D maps using an RGB-D camera. Our approach does not require any further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast cameras motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open-source and has already been widely adopted by the robotics community.",
"title": ""
}
] |
[
{
"docid": "8538dea1bed2a699e99e5d89a91c5297",
"text": "Friction is primary disturbance in motion control. Different types of friction cause diminution of original torque in a DC motor, such as static friction, viscous friction etc. By some means if those can be determined and compensated, the friction effect from the DC motor can be neglected. It would be a great advantage for control systems. Authors have determined the types of frictions as well as frictional coefficients and suggested a unique way of compensating the friction in a DC motor using Disturbance Observer Method which is used to determine the disturbance torques acting on a DC motor. In simulation approach, the method is modelled using MATLAB and the results have been obtained and analysed. The block diagram consists with DC motor model with DOB and RTOB. Practical approach of the implemented block diagram is shown by the obtained results. It is discussed the possibility of applying this to real life applications.",
"title": ""
},
{
"docid": "35117070e1140f41b87f4849ddbbd3f2",
"text": "Iris recognition is a promising method by which to accurately identify a person. During the iris recognition stage, the features of the iris are extracted, including the unique, individual texture of the iris. The ability to extract the texture of the iris in non-cooperative environments from eye images captured at different distances, containing reflections, and under visible wavelength illumination will lead to increased iris recognition performance. A method that combined multiscale sparse representation of local Radon transform was proposed to down sample a normalized iris into different lengths of scales and different orientations of angles to form an iris feature vector. This research was tested using 1000 eye images from the UBIRIS.v2 database. The results showed that the proposed method performed better than existing methods when dealing with iris images captured at different distances. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cc77040acb144d0a49557c9ddc852ef3",
"text": "The training time for SVMs to compute the maximal marginal hyper-plane is at least O(N/sup 2/) with the data set size N, which makes it nonfavorable for large data sets. This work presents a study for enhancing the training time of SVMs, specifically when dealing with large data sets, using hierarchical clustering analysis. We use the dynamically growing self-organizing tree (DGSOT) algorithm for clustering because it has proved to overcome the drawbacks of traditional hierarchical clustering algorithms. Clustering analysis helps find the boundary points, which are the most qualified data points to train SVMs, between two classes. We present a new approach of combination of SVMs and DGSOT, which starts with an initial training set and expands it gradually using the clustering structure produced by the DGSOT algorithm. We compare our approach with the Rocchio Bundling technique in terms of accuracy loss and training time gain using two benchmark real data sets.",
"title": ""
},
{
"docid": "56ff8aa7934ed264908f42025d4c175b",
"text": "The identification of design patterns as part of the reengineering process can convey important information to the designer. However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method",
"title": ""
},
{
"docid": "c158249f32680238b4b81e27ee430233",
"text": "We introduce a smart antenna (SA) module to be used in OMNeT++ for wireless sensor networks (WSN). This module is a collection of tools in OMNeT++ for adding smart antenna capability to a central node in WSNs. It is based on sectoral sweeper (SS) scheme which was shown to provide efficient task management and easy localization scheme. The number of sensing nodes, energy consumption of nodes, and data traffic carried in the network are reduced with the SS. OMNeT++ is an open source object-oriented modular discrete event network simulator consisting of hierarchically nested modules. With the SA module, desired task region can be specified by a task beam with changing beam width and beam direction parameters in the OMNeT++ configuration file. In addition, each node in the network model that uses SA module has capability of Mobility framework which is intended to support wireless and mobile simulations within OMNeT++. The simulation results provide performance evaluation using the developed module in OMNeT++ for WSNs.",
"title": ""
},
{
"docid": "e0c83197770752c9fdfe5e51edcd3d46",
"text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.",
"title": ""
},
{
"docid": "48a45f03f31d8fc0daede6603f3b693a",
"text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike the most of commercial and non-commercial related softwares, we found that GelClust is very user-friendly and guides the user from image toward dendrogram through seven simple steps. Furthermore, the software, which is implemented in C# programming language under Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.",
"title": ""
},
{
"docid": "66876eb3710afda075b62b915a2e6032",
"text": "In this paper we analyze the CS Principles project, a proposed Advanced Placement course, by focusing on the second pilot that took place in 2011-2012. In a previous publication the first pilot of the course was explained, but not in a context related to relevant educational research and philosophy. In this paper we analyze the content and the pedagogical approaches used in the second pilot of the project. We include information about the third pilot being conducted in 2012-2013 and the portfolio exam that is part of that pilot. Both the second and third pilots provide evidence that the CS Principles course is succeeding in changing how computer science is taught and to whom it is taught.",
"title": ""
},
{
"docid": "0b70a4a44a26ff9218224727fbba823c",
"text": "Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put all together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of total eighteen DA tasks on popular Office-31 and Office-Caltech datasets.",
"title": ""
},
{
"docid": "28fc20058052e2f4288465f618be8641",
"text": "One means to support for design-by-analogy (DbA) in practice involves giving designers efficient access to source analogies as inspiration to solve problems. The patent database has been used for many DbA support efforts, as it is a preexisting repository of catalogued technology. Latent Semantic Analysis (LSA) has been shown to be an effective computational text processing method for extracting meaningful similarities between patents for useful functional exploration during DbA. However, this has only been shown to be useful at a small-scale (100 patents). Considering the vastness of the patent database and realistic exploration at a largescale, it is important to consider how these computational analyses change with orders of magnitude more data. We present analysis of 1,000 random mechanical patents, comparing the ability of LSA to Latent Dirichlet Allocation (LDA) to categorize patents into meaningful groups. Resulting implications for large(r) scale data mining of patents for DbA support are detailed.",
"title": ""
},
{
"docid": "bc83ea7c70a901d4b22c3aa13386e522",
"text": "Code-switching (CS) refers to a linguistic phenomenon where a speaker uses different languages in an utterance or between alternating utterances. In this work, we study end-to-end (E2E) approaches to the Mandarin-English code-switching speech recognition (CSSR) task. We first examine the effectiveness of using data augmentation and byte-pair encoding (BPE) subword units. More importantly, we propose a multitask learning recipe, where a language identification task is explicitly learned in addition to the E2E speech recognition task. Furthermore, we introduce an efficient word vocabulary expansion method for language modeling to alleviate data sparsity issues under the code-switching scenario. Experimental results on the SEAME data, a Mandarin-English CS corpus, demonstrate the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "f4490447bf8a43de95d61e1626d365ae",
"text": "The connective tissue of the skin is composed mostly of collagen and elastin. Collagen makes up 70-80% of the dry weight of the skin and gives the dermis its mechanical and structural integrity. Elastin is a minor component of the dermis, but it has an important function in providing the elasticity of the skin. During aging, the synthesis of collagen gradually declines, and the skin thus becomes thinner in protected skin, especially after the seventh decade. Several factors contribute to the aging of the skin. In several hereditary disorders collagen or elastin are deficient, leading to accelerated aging. In cutis laxa, for example, elastin fibers are deficient or completely lacking, leading to sagging of the skin. Solar irradiation causes skin to look prematurely aged. Especially ultraviolet radiation induces an accumulation of abnormal elastotic material. These changes are usually observed after 60 years of age, but excessive exposure to the sun may cause severe photoaging as early as the second decade of life. The different biochemical and mechanical parameters of the dermis can be studied by modern techniques. The applications of these techniques to study the aging of dermal connective tissue are described in detail.",
"title": ""
},
{
"docid": "cbe9d6b2fadd67fdb5dbe6cbd944988d",
"text": "Data centers demand high-current, high-efficiency, and low-cost power solutions. The high-voltage dc distribution power architecture has been drawing attention due to its lower conduction loss on cables and harnesses. In this structure, the 380-12 V high output current isolated converter is the key stage. This paper presents a 1-MHz 1-kW LLC resonant converter using GaN devices and planar matrix transformers that are designed and optimized for this application. The transformer design and the optimization of the output capacitor termination are performed and verified. Finally, this cost-effective converter achieves above 97% peak efficiency and 700-W/in3 power density.",
"title": ""
},
{
"docid": "708d024f7fccc00dd3961ecc9aca1893",
"text": "Transportation networks play a crucial role in human mobility, the exchange of goods and the spread of invasive species. With 90 per cent of world trade carried by sea, the global network of merchant ships provides one of the most important modes of transportation. Here, we use information about the itineraries of 16 363 cargo ships during the year 2007 to construct a network of links between ports. We show that the network has several features that set it apart from other transportation networks. In particular, most ships can be classified into three categories: bulk dry carriers, container ships and oil tankers. These three categories do not only differ in the ships' physical characteristics, but also in their mobility patterns and networks. Container ships follow regularly repeating paths whereas bulk dry carriers and oil tankers move less predictably between ports. The network of all ship movements possesses a heavy-tailed distribution for the connectivity of ports and for the loads transported on the links with systematic differences between ship types. The data analysed in this paper improve current assumptions based on gravity models of ship movements, an important step towards understanding patterns of global trade and bioinvasion.",
"title": ""
},
{
"docid": "127434902fe337d104929cd95db42def",
"text": "Formal concepts and closed itemsets proved to be of big importance for knowledge discovery, both as a tool for concise representation of association rules and a tool for clustering and constructing domain taxonomies and ontologies. Exponential explosion makes it difficult to consider the whole concept lattice arising from data, one needs to select most useful and interesting concepts. In this paper interestingness measures of concepts are considered and compared with respect to various aspects, such as efficiency of computation and applicability to noisy data and performing ranking correlation. Formal Concept Analysis intrestingess measures closed itemsets",
"title": ""
},
{
"docid": "312bfca90e57468622e6b3cd2b48a10b",
"text": "Faciogenital dysplasia or Aarskog–Scott syndrome (AAS) is a genetically heterogeneous developmental disorder. The X-linked form of AAS has been ascribed to mutations in the FGD1 gene. However, although AAS may be considered as a relatively frequent clinical diagnosis, mutations have been established in few patients. Genetic heterogeneity and the clinical overlap with a number of other syndromes might explain this discrepancy. In this study, we have conducted a single-strand conformation polymorphism (SSCP) analysis of the entire coding region of FGD1 in 46 AAS patients and identified eight novel mutations, including one insertion, four deletions and three missense mutations (19.56% detection rate). One mutation (528insC) was found in two independent families. The mutations are scattered all along the coding sequence. Phenotypically, all affected males present with the characteristic AAS phenotype. FGD1 mutations were not associated with severe mental retardation. However, neuropsychiatric disorders, mainly behavioural and learning problems in childhood, were observed in five out of 12 mutated individuals. The current study provides further evidence that mutations of FGD1 may cause AAS and expands the spectrum of disease-causing mutations. The importance of considering the neuropsychological phenotype of AAS patients is discussed.",
"title": ""
},
{
"docid": "c6f173f75917ee0632a934103ca7566c",
"text": "Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but making full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than MT. We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C-codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.",
"title": ""
},
{
"docid": "589396a7c9dae0567f0bcd4d83461a6f",
"text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.",
"title": ""
},
{
"docid": "f861b693a060d8da8df2d680d68566de",
"text": "Density-based clustering algorithms are attractive for the task of class identification in spatial database. However, in many cases, very different local-density clusters exist in different regions of data space, therefore, DBSCAN [Ester, M. et al., A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In E. Simoudis, J. Han, & U. M. Fayyad (Eds.), Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining (pp. 226-231). Portland, OR: AAAI.] using a global density parameter is not suitable. As an improvement, OPTICS [Ankerst, M. et al,(1999). OPTICS: Ordering Points To Identify the Clustering Structure. In A. Delis, C. Faloutsos, & S. Ghandeharizadeh (Eds.), Proc. ACM SIGMOD Int. Conf. on Management of Data (pp. 49-60). Philadelphia, PA: ACM.] creates an augmented ordering of the database representing its density-based clustering structure, but it only generates the clusters whose local-density exceeds some threshold instead of similar local-density clusters and doesn't produce a clustering of a data set explicitly. Furthermore the parameters required by almost all the well-known clustering algorithms are hard to determine but have a significant influence on the clustering result. In this paper, a new clustering algorithm LDBSCAN relying on a local-density-based notion of clusters is proposed to solve those problems and, what is more, it is very easy for us to pick the appropriate parameters and takes the advantage of the LOF [Breunig, M. M., et al.,(2000). LOF: Identifying Density-Based Local Outliers. In W. Chen, J. F. Naughton, & P. A. Bernstein (Eds.), Proc. ACM SIGMOD Int. Conf. on Management of Data (pp. 93-104). Dalles, TX: ACM.] to detect the noises comparing with other density-based clustering algorithms. The proposed algorithm has potential applications in business intelligence and enterprise information systems.",
"title": ""
},
{
"docid": "8f494ce7965747ab0f90c1543dd3c02e",
"text": "The world is becoming urban. The UN predicts that the world's urban population will almost double from 3·3 billion in 2007 to 6·3 billion in 2050. Most of this increase will be in developing countries. Exponential urban growth is having a profound effect on global health. Because of international travel and migration, cities are becoming important hubs for the transmission of infectious diseases, as shown by recent pandemics. Physicians in urban environments in developing and developed countries need to be aware of the changes in infectious diseases associated with urbanisation. Furthermore, health should be a major consideration in town planning to ensure urbanisation works to reduce the burden of infectious diseases in the future.",
"title": ""
}
] |
scidocsrr
|
8cc22c2e569fdc5c1f7ed74dec3fff9a
|
FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs
|
[
{
"docid": "bc0def2cdcb570feaee55293cea0c97f",
"text": "Inductive Logic Programming (ILP) is a new discipline which investigates the inductive construction of rst-order clausal theories from examples and background knowledge. We survey the most important theories and methods of this new eld. Firstly, various problem speciications of ILP are formalised in semantic settings for ILP, yielding a \\model-theory\" for ILP. Secondly, a generic ILP algorithm is presented. Thirdly, the inference rules and corresponding operators used in ILP are presented, resulting in a \\proof-theory\" for ILP. Fourthly, since inductive inference does not produce statements which are assured to follow from what is given, inductive inferences require an alternative form of justiication. This can take the form of either probabilistic support or logical constraints on the hypothesis language. Information compression techniques used within ILP are presented within a unifying Bayesian approach to connrmation and corroboration of hypotheses. Also, diierent ways to constrain the hypothesis language, or specify the declarative bias are presented. Fifthly, some advanced topics in ILP are addressed. These include aspects of computational learning theory as applied to ILP, and the issue of predicate invention. Finally, we survey some applications and implementations of ILP. ILP applications fall under two diierent categories: rstly scientiic discovery and knowledge acquisition, and secondly programming assistants.",
"title": ""
}
] |
[
{
"docid": "5fe43f0b23b0cfd82b414608e60db211",
"text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.",
"title": ""
},
{
"docid": "7b96cba9b115d842f0e6948434b40b37",
"text": "A broadband printed microstrip antenna having cross polarization level >; 15 dB with improved gain in the entire frequency band is presented. Principle of stacking is implemented on a strip loaded slotted broadband patch antenna for enhancing the gain without affecting the broadband impedance matching characteristics and offsetting the position of the upper patch excites a lower resonance which enhances the bandwidth further. The antenna has a dimension of 42 × 55 × 4.8 mm3 when printed on a substrate of dielectric constant 4.2 and has a 2:1 VSWR bandwidth of 34.9%. The antenna exhibits a peak gain of 8.07 dBi and a good front to back ratio better than 12 dB is observed throughout the entire operating band. Simulated and experimental reflection characteristics of the antenna with and without stacking along with offset variation studies, radiation patterns and gain of the final antenna are presented.",
"title": ""
},
{
"docid": "333b21433d17a9d271868e203c8a9481",
"text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).",
"title": ""
},
{
"docid": "faad1a2863986f31f26f1e261d75096a",
"text": "Multilabel classification is rapidly developing as an important aspect of modern predictive modeling, motivating study of its theoretical aspects. To this end, we propose a framework for constructing and analyzing multilabel classification metrics which reveals novel results on a parametric form for population optimal classifiers, and additional insight into the role of label correlations. In particular, we show that for multilabel metrics constructed as instance-, microand macroaverages, the population optimal classifier can be decomposed into binary classifiers based on the marginal instance-conditional distribution of each label, with a weak association between labels via the threshold. Thus, our analysis extends the state of the art from a few known multilabel classification metrics such as Hamming loss, to a general framework applicable to many of the classification metrics in common use. Based on the population-optimal classifier, we propose a computationally efficient and general-purpose plug-in classification algorithm, and prove its consistency with respect to the metric of interest. Empirical results on synthetic and benchmark datasets are supportive of our theoretical findings.",
"title": ""
},
{
"docid": "6356a0272b95ade100ad7ececade9e36",
"text": "We describe a browser extension, PwdHash, that transparently produces a different password for each site, improving web password security and defending against password phishing and other attacks. Since the browser extension applies a cryptographic hash function to a combination of the plaintext password entered by the user, data associated with the web site, and (optionally) a private salt stored on the client machine, theft of the password received at one site will not yield a password that is useful at another site. While the scheme requires no changes on the server side, implementing this password method securely and transparently in a web browser extension turns out to be quite difficult. We describe the challenges we faced in implementing PwdHash and some techniques that may be useful to anyone facing similar security issues in a browser environment.",
"title": ""
},
{
"docid": "737231466c50ac647f247b60852026e2",
"text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people are accessing key-based security systems. Existing methods of obtaining such secret information rely on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user’s fine-grained hand movements, which enable attackers to reproduce the trajectories of the user’s hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user’s hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 7,000 key entry traces collected from 20 adults for key-based security systems (i.e., ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80 percent accuracy with only one try and more than 90 percent accuracy with three tries. Moreover, the performance of our system is consistently good even under low sampling rate and when inferring long PIN sequences. To the best of our knowledge, this is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.",
"title": ""
},
{
"docid": "436657862080e0c37966ddba3df0c4b5",
"text": "Scholarly digital libraries increasingly provide analytics to information within documents themselves. This includes information about the logical document structure of use to downstream components, such as search, navigation, and summarization. In this paper, the authors describe SectLabel, a module that further develops existing software to detect the logical structure of a document from existing PDF files, using the formalism of conditional random fields. While previous work has assumed access only to the raw text representation of the document, a key aspect of this work is to integrate the use of a richer representation of the document that includes features from optical character recognition (OCR), such as font size and text position. Experiments reveal that using such rich features improves logical structure detection by a significant 9 F1 points, over a suitable baseline, motivating the use of richer document representations in other digital library applications. DOI: 10.4018/978-1-4666-0900-6.ch014",
"title": ""
},
{
"docid": "9a1bb9370031cbe9b6b3175b216aeea5",
"text": "The area of an image multi-label classification is increase continuously in last few years, in machine learning and computer vision. Multi-label classification has attracted significant attention from researchers and has been applied to an image annotation. In multi-label classification, each instance is assigned to multiple classes; it is a common problem in data analysis. In this paper, represent general survey on the research work is going on in the field of multi-label classification. Finally, paper is concluded towards challenges in multi-label classification for images for future research.",
"title": ""
},
{
"docid": "cf8d4be65f988bd45dc56dc8dc3988d2",
"text": "In this paper, we deal with several aspects related to the control of tendon-based actuation systems for robotic devices. In particular, the problems that are considered in this paper are related to the modeling, identification, and control of tendons sliding on curved pathways, subject to friction and viscoelastic effects. Tendons made in polymeric materials are considered, and therefore, hysteresis in the transmission system characteristic must be taken into account as an additional nonlinear effect because of the plasticity and creep phenomena typical of these materials. With the aim of reproducing these behaviors, a viscoelastic model is used to model the tendon compliance. Particular attention has been given to the friction effects arising from the interaction between the tendon pathway and the tendon itself. This phenomenon has been characterized by means of a LuGre-like dynamic friction model to consider the effects that cannot be reproduced by employing a static friction model. A specific setup able to measure the tendon's tension in different points along its path has been designed in order to verify the tension distribution and identify the proper parameters. Finally, a simple control strategy for the compensation of these nonlinear effects and the control of the force that is applied by the tendon to the load is proposed and experimentally verified.",
"title": ""
},
{
"docid": "3072c5458a075e6643a7679ccceb1417",
"text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.",
"title": ""
},
{
"docid": "3d7eb095e68a9500674493ee58418789",
"text": "Hundreds of scholarly studies have investigated various aspects of the immensely popular Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none of them has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and also provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment, but also for more serious topics such as health information and legal background. Scholars, librarians and students are common users of Wikipedia, and it provides a unique opportunity for educating students in digital",
"title": ""
},
{
"docid": "aadc952471ecd67d0c0731fa5a375872",
"text": "As the aircraft industry is moving towards the all electric and More Electric Aircraft (MEA), there is increase demand for electrical power in the aircraft. The trend in the aircraft industry is to replace hydraulic and pneumatic systems with electrical systems achieving more comfort and monitoring features. Moreover, the structure of MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Detailed descriptions of the modern MEA generation and distribution systems as well as the power converters and load types are explained and outlined. MEA electrical distribution systems are mainly in the form of multi-converter power electronic system.",
"title": ""
},
{
"docid": "1c80fdc30b2b37443367dae187fbb376",
"text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.",
"title": ""
},
{
"docid": "44f257275a36308ce088881fafc92d7c",
"text": "Frauds related to the ATM (Automatic Teller Machine) are increasing day by day which is a serious issue. ATM security is used to provide protection against these frauds. Though security is provided for ATM machine, cases of robberies are increasing. Previous technologies provide security within machines for secure transaction, but machine is not neatly protected. The ATM machines are not safe since security provided traditionally were either by using RFID reader or by using security guard outside the ATM. This security is not sufficient because RFID card can be stolen and can be misused for robbery as well as watchman can be blackmailed by the thief. So there is a need to propose new technology which can overcome this problem. This paper proposes a system which aims to design real-time monitoring and controlling system. The system is implemented using Raspberry Pi and fingerprint module which make the system more secure, cost effective and stand alone. For controlling purpose, Embedded Web Server (EWS) is designed using Raspberry Pi which serves web page on which video footage of ATM center is seen and controlled. So the proposed system removes the drawback of manual controlling camera module and door also this system is stand alone and cost effective.",
"title": ""
},
{
"docid": "65ecfef85ae09603afddde09a2c65bf4",
"text": "We outline a representation for discrete multivariate distributions in terms of interventional potential functions that are globally normalized. This representation can be used to model the effects of interventions, and the independence properties encoded in this model can be represented as a directed graph that allows cycles. In addition to discussing inference and sampling with this representation, we give an exponential family parametrization that allows parameter estimation to be stated as a convex optimization problem; we also give a convex relaxation of the task of simultaneous parameter and structure learning using group `1regularization. The model is evaluated on simulated data and intracellular flow cytometry data.",
"title": ""
},
{
"docid": "854d3759757b3e335dac88adbea9734c",
"text": "Micro Hotplate (MHP) is the key component in micro-sensors particularly gas sensors. In this paper, we have presented the design and simulation results of a meander micro heater based on platinum material. A comparative study by simulating two different heater thicknesses has also been presented in this paper. The membrane size is 1.4mm × 1.6mm and a thickness of 1.4μm. Above the membrane, a platinum film was deposed with a size of 1.1 × 1.1 mm and a various thickness of 0.1 μm and 0.15 μm. Power consumption and temperature distribution were determined in the Platinum micro heater's structure over a supply voltage of 5, 6 and 7 V.",
"title": ""
},
{
"docid": "2effb3276d577d961f6c6ad18a1e7b3e",
"text": "This paper extends the recovery of structure and motion to im age sequences with several independently moving objects. The mot ion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camer parameters. Existing work on independent motions has not employed this constr ai t, and therefore has not gained over independent static-scene reconstructi ons. We show how this constraint leads to several new results in st ructure and motion recovery, where Euclidean reconstruction becomes pos ible in the multibody case, when it was underconstrained for a static scene. We sho w how to combine motions of high-relief, low-relief and planar objects. Add itionally we show that structure and motion can be recovered from just 4 points in th e uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the v alidity of the theory and the improvement in accuracy obtained using multibody an alysis.",
"title": ""
},
{
"docid": "8fa8e875a948aed94b7682b86fcbc171",
"text": "Do teams show stable conflict interaction patterns that predict their performance hours, weeks, or even months in advance? Two studies demonstrate that two of the same patterns of emotional interaction dynamics that distinguish functional from dysfunctional marriages also distinguish high from low-performance design teams in the field, up to 6 months in advance, with up to 91% accuracy, and based on just 15minutes of interaction data: Group Affective Balance, the balance of positive to negative affect during an interaction, and Hostile Affect, the expression of a set of specific negative behaviors were both found as predictors of team performance. The research also contributes a novel method to obtain a representative sample of a team's conflict interaction. Implications for our understanding of design work in teams and for the design of groupware and feedback intervention systems are discussed.",
"title": ""
},
{
"docid": "f5c4c25286eb419eb8f7100702062180",
"text": "The primary objective of this investigation was to quantitatively identify which training variables result in the greatest strength and hypertrophy outcomes with lower body low intensity training with blood flow restriction (LI-BFR). Searches were performed for published studies with certain criteria. First, the primary focus of the study must have compared the effects of low intensity endurance or resistance training alone to low intensity exercise with some form of blood flow restriction. Second, subject populations had to have similar baseline characteristics so that valid outcome measures could be made. Finally, outcome measures had to include at least one measure of muscle hypertrophy. All studies included in the analysis utilized MRI except for two which reported changes via ultrasound. The mean overall effect size (ES) for muscle strength for LI-BFR was 0.58 [95% CI: 0.40, 0.76], and 0.00 [95% CI: −0.18, 0.17] for low intensity training. The mean overall ES for muscle hypertrophy for LI-BFR training was 0.39 [95% CI: 0.35, 0.43], and −0.01 [95% CI: −0.05, 0.03] for low intensity training. Blood flow restriction resulted in significantly greater gains in strength and hypertrophy when performed with resistance training than with walking. In addition, performing LI-BFR 2–3 days per week resulted in the greatest ES compared to 4–5 days per week. Significant correlations were found between ES for strength development and weeks of duration, but not for muscle hypertrophy. This meta-analysis provides insight into the impact of different variables on muscular strength and hypertrophy to LI-BFR training.",
"title": ""
},
{
"docid": "a649a105b1d127c9c9ea2a9d4dad5d11",
"text": "Given the size and confidence of pairwise local orderings, angular embedding (AE) finds a global ordering with a near-global optimal eigensolution. As a quadratic criterion in the complex domain, AE is remarkably robust to outliers, unlike its real domain counterpart LS, the least squares embedding. Our comparative study of LS and AE reveals that AE's robustness is due not to the particular choice of the criterion, but to the choice of representation in the complex domain. When the embedding is encoded in the angular space, we not only have a nonconvex error function that delivers robustness, but also have a Hermitian graph Laplacian that completely determines the optimum and delivers efficiency. The high quality of embedding by AE in the presence of outliers can hardly be matched by LS, its corresponding L1 norm formulation, or their bounded versions. These results suggest that the key to overcoming outliers lies not with additionally imposing constraints on the embedding solution, but with adaptively penalizing inconsistency between measurements themselves. AE thus significantly advances statistical ranking methods by removing the impact of outliers directly without explicit inconsistency characterization, and advances spectral clustering methods by covering the entire size-confidence measurement space and providing an ordered cluster organization.",
"title": ""
}
] |
scidocsrr
|
f2d65fe33583d08a48e7d71f3964e10e
|
Vertex reconstruction of neutrino interactions using deep learning
|
[
{
"docid": "4a684a0a590f326894416d5afc31b63c",
"text": "Collisions at high-energy particle colliders are a traditionally fruitful source of exotic particle discoveries. Finding these rare particles requires solving difficult signal-versus-background classification problems, hence machine-learning approaches are often used. Standard approaches have relied on 'shallow' machine-learning models that have a limited capacity to learn complex nonlinear functions of the inputs, and rely on a painstaking search through manually constructed nonlinear features. Progress on this problem has slowed, as a variety of techniques have shown equivalent performance. Recent advances in the field of deep learning make it possible to learn more complex functions and better discriminate between signal and background classes. Here, using benchmark data sets, we show that deep-learning methods need no manually constructed inputs and yet improve the classification metric by as much as 8% over the best current approaches. This demonstrates that deep-learning approaches can improve the power of collider searches for exotic particles.",
"title": ""
},
{
"docid": "add4f2513f01e94651d789ce79669085",
"text": "Particle colliders are the primary experimental instruments of high-energy physics. By creating conditions that have not occurred naturally since the Big Bang, collider experiments aim to probe the most fundamental properties of matter and the universe. These costly experiments generate very large amounts of noisy data, creating important challenges and opportunities for machine learning. In this work we use deep learning to greatly improve the statistical power on three benchmark problems involving: (1) Higgs bosons; (2) supersymmetric particles; and (3) Higgs boson decay modes. This approach increases the expected discovery significance over traditional shallow methods, by 50%, 2%, and 11% respectively. In addition, we explore the use of model compression to transfer information (dark knowledge) from deep networks to shallow networks.",
"title": ""
}
] |
[
{
"docid": "f65fabefec9be896bcc53c0306ae69ea",
"text": "Multi-Criteria Decision Making (MCDM) techniques are gaining popularity in sustainable energy management. The techniques provide solutions to the problems involving conflicting and multiple objectives. Several methods based on weighted averages, priority setting, outranking, fuzzy principles and their combinations are employed for energy planning decisions. A review of more than 90 published papers is presented here to analyze the applicability of various methods discussed. A classification on application areas and the year of application is presented to highlight the trends. It is observed that Analytical Hierarchy Process is the most popular technique followed by outranking techniques PROMETHEE and ELECTRE. Validation of results with multiple methods, development of interactive decision support systems and application of fuzzy methods to tackle uncertainties in the data is observed in the published literature. # 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ab3fccee69128818756a4d33055aa58f",
"text": "This paper presents an approach to analyze the thematic evolution of a given research field. This approach combines performance analysis and science mapping for detecting and visualizing conceptual subdomains (particular themes or general thematic areas). It allows us to quantify and visualize the thematic evolution of a given research field. To do this, coword analysis is used in a longitudinal framework in order to detect the different themes treated by the research field across the given time period. The performance analysis uses different bibliometric measures, including the h-index, with the purpose of measuring the impact of both the detected themes and thematic areas. The presented approach includes a visualization method for showing the thematic evolution of the studied field. Then, as an example, the thematic evolution of the Fuzzy Sets Theory field is analyzed using the two most important journals in the topic: Fuzzy Sets and Systems and IEEE Transactions on Fuzzy Systems. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9d45323cd4550075d4c2569065ae583c",
"text": "Research on Offline Handwritten Signature Verification explored a large variety of handcrafted feature extractors, ranging from graphology, texture descriptors to interest points. In spite of advancements in the last decades, performance of such systems is still far from optimal when we test the systems against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push further the performance of such method, exploring a range of architectures, and obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset on the task. In the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% in the best result published in literature (that used a combination of multiple classifiers). We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case of slowly-traced forgeries.",
"title": ""
},
{
"docid": "543a4aacf3d0f3c33071b0543b699d3c",
"text": "This paper describes a buffer sharing technique that strikes a balance between the use of disk bandwidth and memory in order to maximize the performance of a video-on-demand server. We make the key observation that the configuration parameters of the system should be independent of the physical characteristics of the data (e.g., popularity of a clip). Instead, the configuration parameters are fixed and our strategy adjusts itself dynamically at run-time to support a pattern of access to the video clips.",
"title": ""
},
{
"docid": "90b84ebf999724d1bbd9e463627341b2",
"text": "Fast searching of content in large motion databases is essential for efficient motion analysis and synthesis. In this work we demonstrate that identifying locally similar regions in human motion data can be practical even for huge databases, if medium-dimensional (15--90 dimensional) feature sets are used for kd-tree-based nearest-neighbor-searches. On the basis of kd-tree-based local neighborhood searches we devise a novel fast method for global similarity searches. We show that knn-searches can be used efficiently within the problems of (a) \"numerical and logical similarity searches\", (b) reconstruction of motions from sparse marker sets, and (c) building so called \"fat graphs\", tasks for which previously algorithms with preprocessing time quadratic in the size of the database and thus only applicable to small collections of motions had been presented. We test our techniques on the two largest freely available motion capture databases, the CMU and HDM05 motion databases comprising more than 750 min of motion capture data proving that our approach is not only theoretically applicable but also solves the problem of fast similarity searches in huge motion databases in practice.",
"title": ""
},
{
"docid": "979e25abca763217d58b995c06bd6c83",
"text": "This paper examines search across competing e-commerce sites. By analyzing panel data from over 10,000 Internet households and three commodity-like products (books, compact discs (CDs), and air travel services), we show that the amount of online search is actually quite limited. On average, households visit only 1.2 book sites, 1.3 CD sites, and 1.8 travel sites during a typical active month in each category. Using probabilistic models, we characterize search behavior at the individual level in terms of (1) depth of search, (2) dynamics of search, and (3) activity of search. We model an individual's tendency to search as a logarithmic process, finding that shoppers search across very few sites in a given shopping month. We extend the logarithmic model of search to allow for time-varying dynamics that may cause the consumer to evolve and, perhaps, learn to search over time. We find that for two of the three product categories studied, search propensity does not change from month to month. However, in the third product category we find mild evidence of time-varying dynamics, where search decreases over time from already low levels. Finally, we model the level of a household's shopping activity and integrate it into our model of search. The results suggest that more-active online shoppers tend also to search across more sites. This consumer characteristic largely drives the dynamics of search that can easily be mistaken as increases from experience at the individual level.",
"title": ""
},
{
"docid": "50044f80063441c9477acc40ac07e19a",
"text": "Natural Language Inference (NLI) is fundamental to many Natural Language Processing (NLP) applications including semantic search and question answering. The NLI problem has gained significant attention due to the release of large scale, challenging datasets. Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect to a given hypothesis. Surprisingly, the use of methods based on structured knowledge – a central topic in artificial intelligence – has not received much attention vis-a-vis the NLI problem. While there are many open knowledge bases that contain various types of reasoning information, their use for NLI has not been well explored. To address this, we present a combination of techniques that harness external knowledge to improve performance on the NLI problem in the science questions domain. We present the results of applying our techniques on text, graph, and text-and-graph based models; and discuss the implications of using external knowledge to solve the NLI problem. Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.",
"title": ""
},
{
"docid": "e1837a92d2a322a8f7157c55b93d1c16",
"text": "An experimental study of interaction in a collaborative desktop virtual environment is described. The aim of the experiment was to investigate if added haptic force feedback in such an environment affects perceived virtual presence, perceived social presence, perceived task performance, and task performance. A between-group design was employed, where seven pairs of subjects used an interface with graphic representation of the environment, audio connection, and haptic force feedback. Seven other pairs of subjects used an interface without haptic force feedback, but with identical features otherwise. The PHANToM, a one-point haptic device, was used for the haptic force feedback, and a program especially developed for the purpose provided the virtual environment. The program enables for two individuals placed in different locations to simultaneously feel and manipulate dynamic objects in a shared desktop virtual environment. Results show that haptic force feedback significantly improves task performance, perceived task performance, and pereceived virtual presence in the collaborative distributed environment. The results suggest that haptic force feedback increases perceived social presence, but the difference is not significant.",
"title": ""
},
{
"docid": "690888d679f93891d278bded0c1238fd",
"text": "The challenge of predicting future values of a time series covers a variety of disciplines. The fundamental problem of selecting the order and identifying the time varying parameters of an autoregressive moving average model (ARMA) concerns many important fields of interest such as linear prediction, system identification and spectral analysis. Recent research activities in forecasting with artificial neural networks (ANNs) suggest that ANNs can be a promising alternative to the traditional ARMA structure. These linear models and ANNs are often compared with mixed conclusions in terms of the superiority in forecasting performance. This study was designed: (a) to investigate a hybrid methodology that combines ANN and ARMA models; (b) to resolve one of the most important problems in time series using ARMA structure and Box–Jenkins methodology: the identification of the model. In this paper, we present a new procedure to predict time series using paradigms such as: fuzzy systems, neural networks and evolutionary algorithms. Our goal is to obtain an expert system based on paradigms of artificial intelligence, so that the linear model can be identified automatically, without the need of human expert participation. The obtained linear model will be combined with ANN, making up an hybrid system that could outperform the forecasting result. r 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0870519536e7229f861323bd4a44c4d2",
"text": "It has become increasingly common for websites and computer media to provide computer generated visual images, called avatars, to represent users and bots during online interactions. In this study, participants (N=255) evaluated a series of avatars in a static context in terms of their androgyny, anthropomorphism, credibility, homophily, attraction, and the likelihood they would choose them during an interaction. The responses to the images were consistent with what would be predicted by uncertainty reduction theory. The results show that the masculinity or femininity (lack of androgyny) of an avatar, as well as anthropomorphism, significantly influence perceptions of avatars. Further, more anthropomorphic avatars were perceived to be more attractive and credible, and people were more likely to choose to be represented by them. Participants reported masculine avatars as less attractive than feminine avatars, and most people reported a preference for human avatars that matched their gender. Practical and theoretical implications of these results for users, designers, and researchers of avatars are discussed.",
"title": ""
},
{
"docid": "24e52943ec6db389dc44b3b5c5a0efbd",
"text": "OBJECTIVE\nWe present the PaHaW Parkinson's disease handwriting database, consisting of handwriting samples from Parkinson's disease (PD) patients and healthy controls. Our goal is to show that kinematic features and pressure features in handwriting can be used for the differential diagnosis of PD.\n\n\nMETHODS AND MATERIAL\nThe database contains records from 37 PD patients and 38 healthy controls performing eight different handwriting tasks. The tasks include drawing an Archimedean spiral, repetitively writing orthographically simple syllables and words, and writing of a sentence. In addition to the conventional kinematic features related to the dynamics of handwriting, we investigated new pressure features based on the pressure exerted on the writing surface. To discriminate between PD patients and healthy subjects, three different classifiers were compared: K-nearest neighbors (K-NN), ensemble AdaBoost classifier, and support vector machines (SVM).\n\n\nRESULTS\nFor predicting PD based on kinematic and pressure features of handwriting, the best performing model was SVM with classification accuracy of Pacc=81.3% (sensitivity Psen=87.4% and specificity of Pspe=80.9%). When evaluated separately, pressure features proved to be relevant for PD diagnosis, yielding Pacc=82.5% compared to Pacc=75.4% using kinematic features.\n\n\nCONCLUSION\nExperimental results showed that an analysis of kinematic and pressure features during handwriting can help assess subtle characteristics of handwriting and discriminate between PD patients and healthy controls.",
"title": ""
},
{
"docid": "d09573af38436e0892695bcda052758f",
"text": "Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable among PFC neurons. Although many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population extinguished value coding. However, a special population of neurons in anterior cingulate cortex (ACC), but not in orbitofrontal cortex (OFC), multiplexed chosen value across decision parameters using a unified encoding scheme and encoded reward prediction errors. In contrast, neurons in OFC, but not ACC, encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, whereas ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters.",
"title": ""
},
{
"docid": "1f3a41fc5202d636fcfe920603df57e4",
"text": "We present data on corporal punishment (CP) by a nationally representative sample of 991 American parents interviewed in 1995. Six types of CP were examined: slaps on the hand or leg, spanking on the buttocks, pinching, shaking, hitting on the buttocks with a belt or paddle, and slapping in the face. The overall prevalence rate (the percentage of parents using any of these types of CP during the previous year) was 35% for infants and reached a peak of 94% at ages 3 and 4. Despite rapid decline after age 5, just over half of American parents hit children at age 12, a third at age 14, and 13% at age 17. Analysis of chronicity found that parents who hit teenage children did so an average of about six times during the year. Severity, as measured by hitting the child with a belt or paddle, was greatest for children age 5-12 (28% of such children). CP was more prevalent among African American and low socioeconomic status parents, in the South, for boys, and by mothers. The pervasiveness of CP reported in this article, and the harmful side effects of CP shown by recent longitudinal research, indicates a need for psychology and sociology textbooks to reverse the current tendency to almost ignore CP and instead treat it as a major aspect of the socialization experience of American children; and for developmental psychologists to be cognizant of the likelihood that parents are using CP far more often than even advocates of CP recommend, and to inform parents about the risks involved.",
"title": ""
},
{
"docid": "42cf4bd800000aed5e0599cba52ba317",
"text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44 % of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.",
"title": ""
},
{
"docid": "ec130c42c43a2a0ba8f33cd4a5d0082b",
"text": "Support vector machine (SVM) has appeared as a powerful tool for forecasting forex market and demonstrated better performance over other methods, e.g., neural network or ARIMA based model. SVM-based forecasting model necessitates the selection of appropriate kernel function and values of free parameters: regularization parameter and ε– insensitive loss function. In this paper, we investigate the effect of different kernel functions, namely, linear, polynomial, radial basis and spline on prediction error measured by several widely used performance metrics. The effect of regularization parameter is also studied. The prediction of six different foreign currency exchange rates against Australian dollar has been performed and analyzed. Some interesting results are presented.",
"title": ""
},
{
"docid": "04f4058d37a33245abf8ed9acd0af35d",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "07941e1f7a8fd0bbc678b641b80dc037",
"text": "This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. The conclusions yielded from this discussion can be summarized as follows: (1) most existent research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, while any possible benefit obtained is definitely at a cost of increase in computing burdens; (3)the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called “strong AI”, for which obstacles abound for obtaining pivotal progresses.",
"title": ""
},
{
"docid": "8d4c66f9e12c1225df1e79628d666702",
"text": "Recently, wavelet transforms have gained very high attention in many fields and applications such as physics, engineering, signal processing, applied mathematics and statistics. In this paper, we present the advantage of wavelet transforms in forecasting financial time series data. Amman stock market (Jordan) was selected as a tool to show the ability of wavelet transform in forecasting financial time series, experimentally. This article suggests a novel technique for forecasting the financial time series data, based on Wavelet transforms and ARIMA model. Daily return data from 1993 until 2009 is used for this study. 316 S. Al Wadi et al",
"title": ""
},
{
"docid": "ad6d35ab48d46b0f5205397606f0d26a",
"text": "In this paper we use a case study of a project to create a Web 2.0-based, Virtual Research Environment (VRE) for researchers to share digital resources in order to reflect on the principles and practices for embedding eResearch applications within user communities. In particular, we focus on the software development methodologies and project management techniques adopted by the project team in order to ensure that the project remained responsive to changing user requirements without compromising their capacity to keep the project ‘on track’, i.e. meeting the goals declared in the project proposal within budget and on time. Drawing on ethnographic fieldwork, we describe how the project team, whose members are distributed across multiple sites (and often mobile), exploit a repertoire of coordination mechanisms, communication modes and tools, artefacts and structuring devices as they seek to establish the orderly running of the project while following an agile, user-centred development approach.",
"title": ""
},
{
"docid": "5d8ad5dd91a0f59112809ee6dc154e0e",
"text": "In this work we propose a neural network based image descriptor suitable for image patch matching, which is an important task in many computer vision applications. Our approach is influenced by recent success of deep convolutional neural networks (CNNs) in object detection and classification tasks. We develop a model which maps the raw input patch to a low dimensional feature vector so that the distance between representations is small for similar patches and large otherwise. As a distance metric we utilize L2 norm, i.e. Euclidean distance, which is fast to evaluate and used in most popular hand-crafted descriptors, such as SIFT. According to the results, our approach outperforms state-of-the-art L2-based descriptors and can be considered as a direct replacement of SIFT. In addition, we conducted experiments with batch normalization and histogram equalization as a preprocessing method of the input data. The results confirm that these techniques further improve the performance of the proposed descriptor. Finally, we show promising preliminary results by appending our CNNs with recently proposed spatial transformer networks and provide a visualisation and interpretation of their impact.",
"title": ""
}
] |
scidocsrr
|
d10b8314ba96815d7e9476c0e9a938ae
|
Energy Efficient Mobile Cloud Computing Powered by Wireless Energy Transfer
|
[
{
"docid": "10187e22397b1c30b497943764d32c34",
"text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.",
"title": ""
},
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
}
] |
[
{
"docid": "597e00855111c6ccb891c96e28f23585",
"text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.",
"title": ""
},
{
"docid": "fdfea6d3a5160c591863351395929a99",
"text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.",
"title": ""
},
{
"docid": "cf5128cb4259ea87027ddd00189dc931",
"text": "This paper interrogates the currently pervasive discourse of the ‘net generation’ finding the concept of the ‘digital native’ especially problematic, both empirically and conceptually. We draw on a research project of South African higher education students’ access to and use of Information and Communication Technologies (ICTs) to show that age is not a determining factor in students’ digital lives; rather, their familiarity and experience using ICTs is more relevant. We also demonstrate that the notion of a generation of ‘digital natives’ is inaccurate: those with such attributes are effectively a digital elite. Instead of a new net generation growing up to replace an older analogue generation, there is a deepening digital divide in South Africa characterized not by age but by access and opportunity; indeed, digital apartheid is alive and well. We suggest that the possibility for digital democracy does exist in the form of a mobile society which is not age specific, and which is ubiquitous. Finally, we propose redefining the concepts ‘digital’, ‘net’, ‘native’, and ‘generation’ in favour of reclaiming the term ‘digitizen’.",
"title": ""
},
{
"docid": "8e1b6eb4a939c493eff27cf78bab8d47",
"text": "Among the various natural calamities, flood is considered one of the most catastrophic natural hazards, which has a significant impact on the socio-economic lifeline of a country. The Assessment of flood risks facilitates taking appropriate measures to reduce the consequences of flooding. The flood risk assessment requires Big data which are coming from different sources, such as sensors, social media, and organizations. However, these data sources contain various types of uncertainties because of the presence of incomplete and inaccurate information. This paper presents a Belief rule-based expert system (BRBES) which is developed in Big data platform to assess flood risk in real time. The system processes extremely large dataset by integrating BRBES with Apache Spark while a web-based interface has developed allowing the visualization of flood risk in real time. Since the integrated BRBES employs knowledge driven learning mechanism, it has been compared with other data-driven learning mechanisms to determine the reliability in assessing flood risk. The integrated BRBES produces reliable results in comparison to other data-driven approaches. Data for the expert system has been collected by considering different case study areas of Bangladesh to validate the system.",
"title": ""
},
{
"docid": "0a9f37b5a22d4c13cedcff69fc2caf7b",
"text": "The Íslendinga sögur – or Sagas of Icelanders – constitute a collection of medieval literature set in Iceland around the late 9th to early 11th centuries, the so-called Saga Age. They purport to describe events during the period around the settlement of Iceland and the generations immediately following and constitute an important element of world literature thanks to their unique narrative style. Although their historicity is a matter of scholarly debate, the narratives contain interwoven and overlapping plots involving thousands of characters and interactions between them. Here we perform a network analysis of the Íslendinga sögur in an attempt to gather quantitative information on interrelationships between characters and to compare saga society to other social networks.",
"title": ""
},
{
"docid": "1c8cd8953ed2c6dc5c95975a0581237a",
"text": "We present a point tracking system powered by two deep convolutional neural networks. The first network, MagicPoint, operates on single images and extracts salient 2D points. The extracted points are “SLAM-ready” because they are by design isolated and well-distributed throughout the image. We compare this network against classical point detectors and discover a significant performance gap in the presence of image noise. As transformation estimation is more simple when the detected points are geometrically stable, we designed a second network, MagicWarp, which operates on pairs of point images (outputs of MagicPoint), and estimates the homography that relates the inputs. This transformation engine differs from traditional approaches because it does not use local point descriptors, only point locations. Both networks are trained with simple synthetic data, alleviating the requirement of expensive external camera ground truthing and advanced graphics rendering pipelines. The system is fast and lean, easily running 30+ FPS on a single CPU.",
"title": ""
},
{
"docid": "f46c9848064716097c289ecb08052cad",
"text": "This paper compares the performance of Black-Scholes with an artificial neural network (ANN) in pricing European style call options on the FTSE 100 index. It is the first extensive study of the performance of ANNs in pricing UK options, and the first to allow for dividends in the closed-form model. For out-of themoney options, the ANN is clearly superior to Black-Scholes. For in-the-money options, if the sample space is restricted by excluding deep in-the-money and long maturity options (3.4% of total volume), the performance of the ANN is comparable with that of Black-Scholes. The superiority of the ANN is a surprising result, given that European style equity options are the home ground of Black-Scholes, and suggests that ANNs may have an important role to play in pricing other options for which there is either no closed-form model, or the closed-form model is less successful than Black-Scholes for equity options.",
"title": ""
},
{
"docid": "5e94e30719ac09e86aaa50d9ab4ad57b",
"text": "Blogs, regularly updated online journals, allow people to quickly and easily create and share online content. Most bloggers write about their everyday lives and generally have a small audience of regular readers. Readers interact with bloggers by contributing comments in response to specific blog posts. Moreover, readers of blogs are often bloggers themselves and acknowledge their favorite blogs by adding them to their blogrolls or linking to them in their posts. This paper presents a study of bloggers’ online and real life relationships in three blog communities: Kuwait Blogs, Dallas/Fort Worth Blogs, and United Arab Emirates Blogs. Through a comparative analysis of the social network structures created by blogrolls and blog comments, we find different characteristics for different kinds of links. Our online survey of the three communities reveals that few of the blogging interactions reflect close offline relationships, and moreover that many online relationships were formed through blogging.",
"title": ""
},
{
"docid": "6e60d6b878c35051ab939a03bdd09574",
"text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.",
"title": ""
},
{
"docid": "a80e3d5ee1d158295378671fcc3ea4fb",
"text": "We review the task of Sentence Pair Scoring, popular in the literature in various forms — viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attentionbased neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence models.",
"title": ""
},
{
"docid": "bfe45d100d0df1b5dad7c63a7a070359",
"text": "AIM\nTo present the regenerative endodontic treatment procedure of a perforated internal root resorption case and its clinical and radiographic findings after 2 years.\n\n\nSUMMARY\nA 14-year-old female patient was referred complaining of moderate pain associated with her maxillary left lateral incisor. After radiographic examination, a perforated internal resorption lesion in the middle third of tooth 22 was detected. Under local anaesthesia and rubber dam isolation, an access cavity was prepared and the root canal was shaped using K-files under copious irrigation with 1% NaOCl, 17% EDTA and distilled water. At the end of the first and second appointments, calcium hydroxide (CH) paste was placed in the root canal using a lentulo. After 3 months, the CH paste was removed using 1% NaOCl and 17% EDTA solutions and bleeding in the root canal was achieved by placing a size 20 K-file into the periapical tissues. Mineral trioxide aggregate was then placed over the blood clot. The access cavity was restored using glass-ionomer cement and resin composite. After 2 years, the tooth was asymptomatic and radiographic examination revealed hard tissue formation in the perforated resorption area and remodelling of the root surface.\n\n\nKEY LEARNING POINTS\nRegenerative endodontic treatment procedures are an alternative approach to treat perforated internal root resorption lesions. Calcium hydroxide was effective as an intracanal medicament in regenerative endodontic treatment procedures.",
"title": ""
},
{
"docid": "88a21d973ec80ee676695c95f6b20545",
"text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"title": ""
},
{
"docid": "d558f980b85bf970a7b57c00df361591",
"text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.",
"title": ""
},
{
"docid": "2e3f05ee44b276b51c1b449e4a62af94",
"text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.",
"title": ""
},
{
"docid": "85bc241c03d417099aa155766e6a1421",
"text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.",
"title": ""
},
{
"docid": "f31669e97fc655e74e8bb8324031060b",
"text": "Being an emerging paradigm for display advertising, RealTime Bidding (RTB) drives the focus of the bidding strategy from context to users’ interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in computational advertising area have been suffering from lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users’ responses from advertisers’ perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, they are valuable for reproducible research and understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.",
"title": ""
},
{
"docid": "e50842fc8438af7fe6ce4b6d9a5439a7",
"text": "OBJECTIVE\nTimely recognition and optimal management of atherogenic dyslipidemia (AD) and residual vascular risk (RVR) in family medicine.\n\n\nBACKGROUND\nThe global increase of the incidence of obesity is accompanied by an increase in the incidence of many metabolic and lipoprotein disorders, in particular AD, as an typical feature of obesity, metabolic syndrome, insulin resistance and diabetes type 2. AD is an important factor in cardio metabolic risk, and is characterized by a lipoprotein profile with low levels of high-density lipoprotein (HDL), high levels of triglycerides (TG) and high levels of low-density lipoprotein (LDL) cholesterol. Standard cardiometabolic risk assessment using the Framingham risk score and standard treatment with statins is usually sufficient, but not always that effective, because it does not reduce RVR that is attributed to elevated TG and reduced HDL cholesterol. RVR is subject to reduction through lifestyle changes or by pharmacological interventions. In some studies it was concluded that dietary interventions should aim to reduce the intake of calories, simple carbohydrates and saturated fats, with the goal of reaching cardiometabolic suitability, rather than weight reduction. Other studies have found that the reduction of carbohydrates in the diet or weight loss can alleviate AD changes, while changes in intake of total or saturated fat had no significant influence. In our presented case, a lifestyle change was advised as a suitable diet with reduced intake of carbohydrates and a moderate physical activity of walking for at least 180 minutes per week, with an recommendation for daily intake of calories alignment with the total daily (24-hour) energy expenditure (24-EE), depending on the degree of physical activity, type of food and the current health condition. Such lifestyle changes together with combined medical therapy with Statins, Fibrates and Omega-3 fatty acids, resulted in significant improvement in atherogenic lipid parameters.\n\n\nCONCLUSION\nUnsuitable atherogenic nutrition and insufficient physical activity are the new risk factors characteristic for AD. Nutritional interventions such as diet with reduced intake of carbohydrates and calories, moderate physical activity, combined with pharmacotherapy can improve atherogenic dyslipidemic profile and lead to loss of weight. Although one gram of fat release twice more kilo calories compared to carbohydrates, carbohydrates seems to have a greater atherogenic potential, which should be explored in future.",
"title": ""
},
{
"docid": "20fa99f56e249d4326a7d840c5cbd9b7",
"text": "Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.",
"title": ""
},
{
"docid": "59370193760b0bebaf530ce669e4ef80",
"text": "AlGaN/GaN HEMT using field plate and recessed gate for X-band application was developed on SiC substrate. Internal matching circuits were designed to achieve high gain at 8 GHz for the developed device with single chip and four chips combining, respectively. The internally matched 5.52 mm single chip AlGaN/GaN HEMT exhibited 36.5 W CW output power with a power added efficiency (PAE) of 40.1% and power density of 6.6 W/mm at 35 V drain bias voltage (Vds). The device with four chips combining demonstrated a CW over 100 W across the band of 7.7-8.2 GHz, and an maximum CW output power of 119.1 W with PAE of 38.2% at Vds =31.5 V. This is the highest output power for AlGaN/GaN HEMT operated at X-band to the best of our knowledge.",
"title": ""
},
{
"docid": "356c29a56a781074462a107a849c3412",
"text": "One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential tasks has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multitask learning problem, they require supervision from large task-specific (expert) networks which require extensive training. We propose a simple yet efficient multi-task learning framework which solves multiple goal-directed tasks in an online or active learning setup without the need for expert supervision.",
"title": ""
}
] |
scidocsrr
|
8a36a7b27bf1715dda981a63bf1764e5
|
Hiding Data in Video Sequences using LSB with Elliptic Curve Cryptography
|
[
{
"docid": "8c8a100e4dc69e1e68c2bd55f010656d",
"text": "In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a signi7cant improvement with respect to a previous work. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "73862c0aa60c03d5a96f755cdc3bf07b",
"text": "Adaptive and innovative application of classical data mining principles and techniques in time series analysis has resulted in development of a concept known as time series data mining. Since the time series are present in all areas of business and scientific research, attractiveness of mining of time series datasets should not be seen only in the context of the research challenges in the scientific community, but also in terms of usefulness of the research results, as a support to the process of business decision-making. A fundamental component in the mining process of time series data is time series segmentation. As a data mining research problem, segmentation is focused on the discovery of rules in movements of observed phenomena in a form of interpretable, novel, and useful temporal patterns. In this Paper, a comprehensive review of the conceptual determinations, including the elements of comparative analysis, of the most commonly used algorithms for segmentation of time series, is being considered.",
"title": ""
},
{
"docid": "899b3bcf6eaaa02e597499862641f868",
"text": "Crowdsourcing systems are popular for solving large-scale labeling tasks with low-paid workers. We study the problem of recovering the true labels from the possibly erroneous crowdsourced labels under the popular Dawid–Skene model. To address this inference problem, several algorithms have recently been proposed, but the best known guarantee is still significantly larger than the fundamental limit. We close this gap by introducing a tighter lower bound on the fundamental limit and proving that the belief propagation (BP) exactly matches the lower bound. The guaranteed optimality of BP is the strongest in the sense that it is information-theoretically impossible for any other algorithm to correctly label a larger fraction of the tasks. Experimental results suggest that the BP is close to optimal for all regimes considered and improves upon competing the state-of-the-art algorithms.",
"title": ""
},
{
"docid": "fef24d203d0a2e5d52aa887a0a442cf3",
"text": "The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will eventually exceed humans in general reasoning ability. A system that is “superintelligent” in the sense of being “smarter than the best human brains in practically every field” could have an enormous impact upon humanity (Bostrom 2014). Just as human intelligence has allowed us to develop tools and strategies for controlling our environment, a superintelligent system would likely be capable of developing its own tools and strategies for exerting control (Muehlhauser and Salamon 2012). In light of this potential, it is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.",
"title": ""
},
{
"docid": "63a75bf6cdb340cf328b87feb4f0ee22",
"text": "A large number of e-commerce websites have started to markup their products using standards such as Microdata, Microformats, and RDFa. However, the markup is mostly not as fine-grained as desirable for applications and mostly consists of free text properties. This paper discusses the challenges that arise in the task of matching descriptions of electronic products from several thousand e-shops that offer Microdata markup. Specifically, our goal is to extract product attributes from product offers, by means of regular expressions, in order to build well structured product specifications. For this purpose we present a technique for learning regular expressions. We evaluate our attribute extraction approach using 1.9 million product offers from 9,240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus. Our results show that with our approach we are able to reach a similar matching quality as with manually defined regular expressions.",
"title": ""
},
{
"docid": "781bdc522ed49108cd7132a9aaf49fce",
"text": "ROC curve analysis is often applied to measure the diagnostic accuracy of a biomarker. The analysis results in two gains: diagnostic accuracy of the biomarker and the optimal cut-point value. There are many methods proposed in the literature to obtain the optimal cut-point value. In this study, a new approach, alternative to these methods, is proposed. The proposed approach is based on the value of the area under the ROC curve. This method defines the optimal cut-point value as the value whose sensitivity and specificity are the closest to the value of the area under the ROC curve and the absolute value of the difference between the sensitivity and specificity values is minimum. This approach is very practical. In this study, the results of the proposed method are compared with those of the standard approaches, by using simulated data with different distribution and homogeneity conditions as well as a real data. According to the simulation results, the use of the proposed method is advised for finding the true cut-point.",
"title": ""
},
{
"docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5",
"text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.",
"title": ""
},
{
"docid": "d7345ac01159101a7b1264f844fcc9e1",
"text": "Neural networks have become very popular in recent years because of the astonishing success of deep learning in various domains such as image and speech recognition. In many of these domains, specific architectures of neural networks, such as convolutional networks, seem to fit the particular structure of the problem domain very well, and can therefore perform in an astonishingly effective way. However, the success of neural networks is not universal across all domains. Indeed, for learning problems without any special structure, or in cases where the data is somewhat limited, neural networks are known not to perform well with respect to traditional machine learning methods such as random forests. In this paper, we show that a carefully designed neural network with random forest structure can have better generalization ability. In fact, this architecture is more powerful than random forests, because the back-propagation algorithm reduces to a more powerful and generalized way of constructing a decision tree. Furthermore, the approach is efficient to train and requires a small constant factor of the number of training examples. This efficiency allows the training of multiple neural networks in order to improve the generalization accuracy. Experimental results on 10 realworld benchmark datasets demonstrate the effectiveness of the proposed enhancements.",
"title": ""
},
{
"docid": "94366591151f18db1551a4a3e4012d95",
"text": "As part of the Taste of Computing project, the Exploring Computer Science (ECS) instructional model has been expanded to many high schools in the Chicago Public Schools system. The authors report on initial outcomes showing that students value the ECS course experience, resulting in increased awareness of and interest in the field of computer science. The authors also compare these results by race and gender. The data provide a good basis for exploring the impact of meaningful computer science instruction on students from groups underrepresented in computing; of several hundred students surveyed, nearly half were female, and over half were Hispanic or African American.",
"title": ""
},
{
"docid": "c8e446ab0dbdaf910b5fb98f672a35dc",
"text": "MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S ≤ R ≤ S 2−S . Our worst case analysis can show that MinHash significantly outperforms SimHash in high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S z−S holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S z−S ≤ R ≤ S 2−S shows that MinHash indeed significantly outperforms SimHash even in low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse. Appearing in Proceedings of the 17 International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 33. Copyright 2014 by the authors.",
"title": ""
},
{
"docid": "6140255e69aa292bf8c97c9ef200def7",
"text": "Food production requires application of fertilizers containing phosphorus, nitrogen and potassium on agricultural fields in order to sustain crop yields. However modern agriculture is dependent on phosphorus derived from phosphate rock, which is a non-renewable resource and current global reserves may be depleted in 50–100 years. While phosphorus demand is projected to increase, the expected global peak in phosphorus production is predicted to occur around 2030. The exact timing of peak phosphorus production might be disputed, however it is widely acknowledged within the fertilizer industry that the quality of remaining phosphate rock is decreasing and production costs are increasing. Yet future access to phosphorus receives little or no international attention. This paper puts forward the case for including long-term phosphorus scarcity on the priority agenda for global food security. Opportunities for recovering phosphorus and reducing demand are also addressed together with institutional challenges. 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "5ccda95046b0e5d1cfc345011b1e350d",
"text": "Considerable emphasis is currently placed on reducing healthcare-associated infection through improving hand hygiene compliance among healthcare professionals. There is also increasing discussion in the lay media of perceived poor hand hygiene compliance among healthcare staff. Our aim was to report the outcomes of a systematic search for peer-reviewed, published studies - especially clinical trials - that focused on hand hygiene compliance among healthcare professionals. Literature published between December 2009, after publication of the World Health Organization (WHO) hand hygiene guidelines, and February 2014, which was indexed in PubMed and CINAHL on the topic of hand hygiene compliance, was searched. Following examination of relevance and methodology of the 57 publications initially retrieved, 16 clinical trials were finally included in the review. The majority of studies were conducted in the USA and Europe. The intensive care unit emerged as the predominant focus of studies followed by facilities for care of the elderly. The category of healthcare worker most often the focus of the research was the nurse, followed by the healthcare assistant and the doctor. The unit of analysis reported for hand hygiene compliance was 'hand hygiene opportunity'; four studies adopted the 'my five moments for hand hygiene' framework, as set out in the WHO guidelines, whereas other papers focused on unique multimodal strategies of varying design. We concluded that adopting a multimodal approach to hand hygiene improvement intervention strategies, whether guided by the WHO framework or by another tested multimodal framework, results in moderate improvements in hand hygiene compliance.",
"title": ""
},
{
"docid": "c224cc83b4c58001dbbd3e0ea44a768a",
"text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear -Catenin signal. At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal -Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS). In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.",
"title": ""
},
{
"docid": "ce87a635c0c3aaa17e7b83d5fb52adce",
"text": "We present a novel definition of the reinforcement learning state, actions and reward function that allows a deep Q-network (DQN) to learn to control an optimization hyperparameter. Using Q-learning with experience replay, we train two DQNs to accept a state representation of an objective function as input and output the expected discounted return of rewards, or q-values, connected to the actions of either adjusting the learning rate or leaving it unchanged. The two DQNs learn a policy similar to a line search, but differ in the number of allowed actions. The trained DQNs in combination with a gradient-based update routine form the basis of the Q-gradient descent algorithms. To demonstrate the viability of this framework, we show that the DQN’s q-values associated with optimal action converge and that the Q-gradient descent algorithms outperform gradient descent with an Armijo or nonmonotone line search. Unlike traditional optimization methods, Q-gradient descent can incorporate any objective statistic and by varying the actions we gain insight into the type of learning rate adjustment strategies that are successful for neural network optimization.",
"title": ""
},
{
"docid": "7c9a28889b209832adfbdee93494620d",
"text": "Wake-up radios have been a popular transceiver architecture in recent years for battery-powered applications such as wireless body area networks (WBANs) [1], wireless sensor networks (WSNs) [2,3], and even electronic toll collection systems (ETCS) [4]. The most important consideration in implementing a wake-up receiver (WuRX) is low power dissipation while maximizing sensitivity. Because of this requirement of very low power, WuRX are usually designed by a simple RF envelope detector (RFED) consisting of Schottky diodes [1,3] or MOSFETs in the weak inversion region [2] without active filtering or amplification of the input signal. Therefore, the performance of the RFED itself is critical for attaining good sensitivity of the WuRX. Moreover, the poor filtering of the input signal renders the WuRX vulnerable to interferers from nearby terminals with high transmit power such as mobile phones and WiFi devices, and this can result in false wake-ups [1]. Although the RFED has very low power, a false wake-up will increase the power consumption of the wake-up radio as it will enable the power-hungry main transceiver.",
"title": ""
},
{
"docid": "aed5bb8a488215afaf30efe054d22d4b",
"text": "OBJECTIVE\nStudies of the neurobiological processes underlying drug addiction primarily have focused on limbic subcortical structures. Here the authors evaluated the role of frontal cortical structures in drug addiction.\n\n\nMETHOD\nAn integrated model of drug addiction that encompasses intoxication, bingeing, withdrawal, and craving is proposed. This model and findings from neuroimaging studies on the behavioral, cognitive, and emotional processes that are at the core of drug addiction were used to analyze the involvement of frontal structures in drug addiction.\n\n\nRESULTS\nThe orbitofrontal cortex and the anterior cingulate gyrus, which are regions neuroanatomically connected with limbic structures, are the frontal cortical areas most frequently implicated in drug addiction. They are activated in addicted subjects during intoxication, craving, and bingeing, and they are deactivated during withdrawal. These regions are also involved in higher-order cognitive and motivational functions, such as the ability to track, update, and modulate the salience of a reinforcer as a function of context and expectation and the ability to control and inhibit prepotent responses.\n\n\nCONCLUSIONS\nThese results imply that addiction connotes cortically regulated cognitive and emotional processes, which result in the overvaluing of drug reinforcers, the undervaluing of alternative reinforcers, and deficits in inhibitory control for drug responses. These changes in addiction, which the authors call I-RISA (impaired response inhibition and salience attribution), expand the traditional concepts of drug dependence that emphasize limbic-regulated responses to pleasure and reward.",
"title": ""
},
{
"docid": "87949c3616f14711fe0eb6f7cc9f95b3",
"text": "Three hydroponic systems (aeroponics, aerohydroponics, and deep-water culture) were compared for the production of potato (Solanum tuberosum) seed tubers. Aerohydroponics was designed to improve the root zone environment of aeroponics by maintaining root contact with nutrient solution in the lower part of the beds, while intermittently spraying roots in the upper part. Root vitality, shoot fresh and dry weight, and total leaf area were significantly highest when cv. Superior, a medium early-maturing cultivar, was grown in the aeroponic system. This better plant growth in the aeroponic system was accompanied by rapid changes of solution pH and EC, and early tuberization. However, with cv. Atlantic, a mid-late maturing cultivar, there were no significant differences in shoot weight and leaf area among the hydroponic systems. The first tuberization was observed in aeroponics on 26–30 and 43–53 days after transplanting for cvs Superior and Atlantic, respectively. Tuberization in aerohydroponics and deep-water culture system occurred about 3–4 and 6–8 days later, respectively. The number of tubers produced was greatest in the deep-water culture system, but the total tuber weight per plant was the least in this system. For cv. Atlantic, the number of tubers <30 g weight was higher in aerohydroponics than in aeroponics, whereas there was no difference in the number of tubers >30 g between aerohydroponics and aeroponics. For cv. Superior, there was no difference in the size distribution of tubers between the two aeroponic systems. It could be concluded that deep-water culture system could be used to produce many small tubers (1–5 g) for plant propagation. However, the reduced number of large tubers above 5 g weight in the deep-water culture system, may favor use of either aeroponics or aerohydroponics. These two systems produced a similar number of tubers in each size group for the medium-early season cv. Superior, whereas aerohydroponics produced more tubers than aeroponics for the mid-late cultivar Atlantic.",
"title": ""
},
{
"docid": "4de2c6422d8357e6cb00cce21e703370",
"text": "OBJECTIVE\nFalls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.\n\n\nDESIGN/SETTING/PARTICIPANTS\nProspective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.\n\n\nMEASUREMENTS\nFalls were reported on a standardized form that included a facility identification code, date, time of the day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.\n\n\nRESULTS\nMore than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need with lower fall risks both in the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.\n\n\nCONCLUSION\nThe differing fall risk patterns in specific subgroups may help to target preventive measures.",
"title": ""
},
{
"docid": "b1c0351af515090e418d59a4b553b866",
"text": "BACKGROUND\nThe dermatoscopic examination of the nail plate has been recently introduced for the evaluation of pigmented nail lesions. There is, however, no evidence that this technique improves diagnostic accuracy of in situ melanoma.\n\n\nOBJECTIVE\nTo establish and validate patterns for intraoperative dermatoscopy of the nail matrix.\n\n\nMETHODS\nIntraoperative nail matrix dermatoscopy was performed in 100 consecutive bands of longitudinal melanonychia that were excised and submitted to histopathologic examination.\n\n\nRESULTS\nWe identified 4 dermatoscopic patterns: regular gray pattern (hypermelanosis), regular brown pattern (benign melanocytic hyperplasia), regular brown pattern with globules or blotch (melanocytic nevi), and irregular pattern (melanoma).\n\n\nLIMITATIONS\nNail matrix dermatoscopy is an invasive procedure that can not routinely be performed in all cases of melanonychia.\n\n\nCONCLUSION\nThe patterns described present high sensitivity and specificity for intraoperative differential diagnosis of pigmented nail lesions.",
"title": ""
},
{
"docid": "793453bdbd1044309e62736ab8b7f017",
"text": "There has been a rapid increase in the number and demand for approved biopharmaceuticals produced from animal cell culture processes over the last few years. In part, this has been due to the efficacy of several humanized monoclonal antibodies that are required at large doses for therapeutic use. There have also been several identifiable advances in animal cell technology that has enabled efficient biomanufacture of these products. Gene vector systems allow high specific protein expression and some minimize the undesirable process of gene silencing that may occur in prolonged culture. Characterization of cellular metabolism and physiology has enabled the design of fed-batch and perfusion bioreactor processes that has allowed a significant improvement in product yield, some of which are now approaching 5 g/L. Many of these processes are now being designed in serum-free and animal-component-free media to ensure that products are not contaminated with the adventitious agents found in bovine serum. There are several areas that can be identified that could lead to further improvement in cell culture systems. This includes the down-regulation of apoptosis to enable prolonged cell survival under potentially adverse conditions. The characterization of the critical parameters of glycosylation should enable process control to reduce the heterogeneity of glycoforms so that production processes are consistent. Further improvement may also be made by the identification of glycoforms with enhanced biological activity to enhance clinical efficacy. The ability to produce the ever-increasing number of biopharmaceuticals by animal cell culture is dependent on sufficient bioreactor capacity in the industry. A recent shortfall in available worldwide culture capacity has encouraged commercial activity in contract manufacturing operations. However, some analysts indicate that this still may not be enough and that future manufacturing demand may exceed production capacity as the number of approved biotherapeutics increases.",
"title": ""
},
{
"docid": "16b08c95aaa4f7db98b00b50cb387014",
"text": "Blockchain-based solutions are one of the major areas of research for institutions, particularly in the financial and the government sectors. There is little disagreement that backbone technologies currently used in these sectors are outdated and need an overhaul to conform to the needs of the times. Distributed or decentralized ledgers in the form of blockchains are one of themost discussed potential solutions to the stated problem. We provide a description of permissioned blockchain systems that could be used in creating secure ledgers or timestamped registries. We contend that the blockchain protocol and data should be accessible to end users to provide a higher level of decentralization and transparency and argue that proof ofwork could be effectively used in permissioned blockchains as a means of providing and diversifying security.",
"title": ""
}
] |
scidocsrr
|
08428df180bcc62b895fc6d2198e76cd
|
Ontology Learning from Text: An Overview
|
[
{
"docid": "94aeb6dad00f174f89b709feab3db21f",
"text": "We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris’ distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness.",
"title": ""
}
] |
[
{
"docid": "68a3f9fb186289f343b34716b2e087f6",
"text": "User interface (UI) is one of the most important components of a mobile app and strongly influences users' perception of the app. However, UI design tasks are typically manual and time-consuming. This paper proposes a novel approach to (semi)-automate those tasks. Our key idea is to develop and deploy advanced deep learning models based on recurrent neural networks (RNN) and generative adversarial networks (GAN) to learn UI design patterns from millions of currently available mobile apps. Once trained, those models can be used to search for UI design samples given user-provided descriptions written in natural language and generate professional-looking UI designs from simpler, less elegant design drafts.",
"title": ""
},
{
"docid": "cce5ae7083e8b23f78cbb592902bf849",
"text": "Digitization changes our world. Industry 4.0, the digital transformation of manufacturing changes the labor market. The impacts of rapid technology development of the fourth industrial revolution present huge challenges for the society and for policy makers. Are we facing reduction of employment by automation rendering human work force uncompetitive with machines? Can creation of new fields of employment, new types of jobs compensate for the loss of traditional labor market requirements?",
"title": ""
},
{
"docid": "2c58d8590ac76348d6495694a35cdb9b",
"text": "Lane detection is to detect lanes on the road and provide the accurate location and shape of each lane. It severs as one of the key techniques to enable modern assisted and autonomous driving systems. However, several unique properties of lanes challenge the detection methods. The lack of distinctive features makes lane detection algorithms tend to be confused by other objects with similar local appearance. Moreover, the inconsistent number of lanes on a road as well as diverse lane line patterns, e.g. solid, broken, single, double, merging, and splitting lines further hamper the performance. In this paper, we propose a deep neural network based method, named LaneNet, to break down the lane detection into two stages: lane edge proposal and lane line localization. Stage one uses a lane edge proposal network for pixel-wise lane edge classification, and the lane line localization network in stage two then detects lane lines based on lane edge proposals. Please note that the goal of our LaneNet is built to detect lane line only, which introduces more difficulties on suppressing the false detections on the similar lane marks on the road like arrows and characters. Despite all the difficulties, our lane detection is shown to be robust to both highway and urban road scenarios method without relying on any assumptions on the lane number or the lane line patterns. The high running speed and low computational cost endow our LaneNet the capability of being deployed on vehicle-based systems. Experiments validate that our LaneNet consistently delivers outstanding performances on real world traffic scenarios.",
"title": ""
},
{
"docid": "0e4334595aeec579e8eb35b0e805282d",
"text": "In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques and madmom provides a module that implements some methods commonly used in MIR such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.",
"title": ""
},
{
"docid": "e027e472740cea38ef29a347442b14d9",
"text": "De-noising and segmentation are fundamental steps in processing of images. They can be used as preprocessing and post-processing step. They are used to enhance the image quality. Various medical imaging that are used in these days are Magnetic Resonance Images (MRI), Ultrasound, X-Ray, CT Scan etc. Various types of noises affect the quality of images which may lead to unpredictable results. Various noises like speckle noise, Gaussian noise and Rician noise is present in ultrasound, MRI respectively. With the segmentation region required for analysis and diagnosis purpose is extracted. Various algorithm for segmentation like watershed, K-mean clustering, FCM, thresholding, region growing etc. exist. In this paper, we propose an improved watershed segmentation using denoising filter. First of all, image will be de-noised with morphological opening-closing technique then watershed transform using linear correlation and convolution operations is applied to improve efficiency, accuracy and complexity of the algorithm. In this paper, watershed segmentation and various techniques which are used to improve the performance of watershed segmentation are discussed and comparative analysis is done.",
"title": ""
},
{
"docid": "36feae58daa260eca6f6dfe6d8e9dbac",
"text": "Novel closed-form expressions for effective material properties of honeycomb radar-absorbing structure (RAS) are proposed. These expressions, which are derived from strong fluctuation theory with anisotropic correlation function, consist of two parts: 1) the initial value part and 2) the dispersion characteristic part. Compared with the classical closed-form formulas, the novel expressions provide for a better formulation of the effective electromagnetic parameters of honeycomb RAS, which are characterized by well-behaved increase in wide frequency band. The good agreement between the theoretical results and the existing experimental data confirms the validity of the proposed expressions. Furthermore, a linear monomial dispersion characteristic function, which argues not for the absolute frequency value, but the relative frequency displacement of a frequency point relative to the frequency of initial value, is introduced to replace the polynomial expansion of the unknown correlation part in strong fluctuation theory. Such replacement reveals the near-linear relationship between the undetermined coefficients of monomial function and the coating thickness of honeycomb RAS. Compared with polynomial fitting method, which is based on polynomial expansion, this technique can further support the prediction of undetermined coefficients, when simulation results or measurement data are not available.",
"title": ""
},
{
"docid": "9520b99708d905d3713867fac14c3814",
"text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.",
"title": ""
},
{
"docid": "ac9fa26b0c4fb063c55405ca62975d15",
"text": "Autoimmune B cells play a major role in mediating tissue damage in multiple sclerosis (MS). In MS, B cells are believed to cross the blood-brain barrier and undergo stimulation, antigen-driven affinity maturation and clonal expansion within the supportive CNS environment. These highly restricted populations of clonally expanded B cells and plasma cells can be detected in MS lesions, in cerebrospinal fluid, and also in peripheral blood. In phase II trials in relapsing MS, monoclonal antibodies that target circulating CD20-positive B lymphocytes dramatically reduced disease activity. These beneficial effects occurred within weeks of treatment, indicating that a direct effect on B cells--and likely not on putative autoantibodies--was responsible. The discovery that depletion of B cells has an impact on MS biology enabled a paradigm shift in understanding how the inflammatory phase of MS develops, and will hopefully lead to development of increasingly selective therapies against culprit B cells and related humoral immune system pathways. More broadly, these studies illustrate how lessons learned from the bedside have unique power to inform translational research. They highlight the essential role of clinician scientists, currently endangered, who navigate the rocky and often unpredictable terrain between the worlds of clinical medicine and biomedical research.",
"title": ""
},
{
"docid": "b9b87641122440fb41d5157dab6a41f4",
"text": "Web services are playing an important role in e-business and ecommerce applications. As web service applications are interoperable and can work on any platform, large scale distributed systems can be developed easily using web services. Finding most suitable web service from vast collection of web services is very crucial for successful execution of applications. Traditional web service discovery approach is a keyword based search using UDDI. Various other approaches for discovering web services are also available. Some of the discovery approaches are syntax based while other are semantic based. Having system for service discovery which can work automatically is also the concern of service discovery approaches. As these approaches are different, one solution may be better than another depending on requirements. Selecting a specific service discovery system is a hard task. In this paper, we give an overview of different approaches for web service discovery described in literature. We present a survey of how these approaches differ from each other.",
"title": ""
},
{
"docid": "212e6ca443e3bb09ffde5204ed8453da",
"text": "Many inverse problems are formulated as optimization problems over certain appropriate input distributions. Recently, there has been a growing interest in understanding the computational hardness of these optimization problems, not only in the worst case, but in an average-complexity sense under this same input distribution. In this note, we are interested in studying another aspect of hardness, related to the ability to learn how to solve a problem by simply observing a collection of previously solved instances. These are used to supervise the training of an appropriate predictive model that parametrizes a broad class of algorithms, with the hope that the resulting “algorithm” will provide good accuracy-complexity tradeoffs in the average sense. We illustrate this setup on the Quadratic Assignment Problem, a fundamental problem in Network Science. We observe that data-driven models based on Graph Neural Networks offer intriguingly good performance, even in regimes where standard relaxation based techniques appear to suffer.",
"title": ""
},
{
"docid": "850854aeae187ffdd74c56135d9a4d5b",
"text": "Dynamic interactive maps with transparent but powerful human interface capabilities are beginning to emerge for a variety of geographical information systems, including ones situated on portables for travelers, students, business and service people, and others working in field settings. In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyze for their potential effectiveness in interacting with this new generation of map systems. Input modality (speech, writing, multimodal) and map display format (highly versus minimally structured) were varied in a within-subject factorial design as people completed realistic tasks with a simulated map system. The results identified a constellation of performance difficulties associated with speech-only map interactions, including elevated performance errors, spontaneous disfluencies, and lengthier task completion t ime-problems that declined substantially when people could interact multimodally with the map. These performance advantages also mirrored a strong user preference to interact multimodally. The error-proneness and unacceptability of speech-only input to maps was attributed in large part to people's difficulty generating spoken descriptions of spatial location. Analyses also indicated that map display format can be used to minimize performance errors and disfluencies, and map interfaces that guide users' speech toward brevity can nearly eliminate disfiuencies. Implications of this research are discussed for the design of high-performance multimodal interfaces for future map",
"title": ""
},
{
"docid": "7d7fff1a6aca2eb2b81a11afc9205122",
"text": "This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D motion capture data. Given a candidate 3D pose, our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms most of the published works in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for real-world images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images. Compared to data generated from more classical rendering engines, our synthetic images do not require any domain adaptation or fine-tuning stage.",
"title": ""
},
{
"docid": "d0992076bfbf8cac6fd66c5bbfb671eb",
"text": "In this paper, we propose a supervised model for ranking word importance that incorporates a rich set of features. Our model is superior to prior approaches for identifying words used in human summaries. Moreover we show that an extractive summarizer which includes our estimation of word importance results in summaries comparable with the state-of-the-art by automatic evaluation. Disciplines Computer Engineering | Computer Sciences Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-14-02. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/989 Improving the Estimation of Word Importance for News Multi-Document Summarization Extended Technical Report Kai Hong University of Pennsylvania Philadelphia, PA, 19104 hongkai1@seas.upenn.edu Ani Nenkova University of Pennsylvania Philadelphia, PA, 19104 nenkova@seas.upenn.edu",
"title": ""
},
{
"docid": "5bde29ce109714f623ae9d69184a8708",
"text": "Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near–far mismatch, source spreading, and local scattering. The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms.",
"title": ""
},
{
"docid": "ae8ad19049574cd52106e0df51cc4e68",
"text": "In the domain of e-health, there are diverse and heterogeneous health care systems with different brands on various platforms. One of the most important challenges in this field is the interoperability which plays a key role on information exchange and sharing. Achieving the interoperability is a difficult task because of complexity and diversity of systems, standards, and kinds of information. The lack of interoperability would lead to increase costs and errors of medical operation in hospitals. The purpose of this article is to present a conceptual model for solving interoperability in health information systems. A Health Service Bus (HSB) as an integrated infrastructure is suggested to facilitate Service Oriented Architecture. A scenario-based evaluation on the proposed conceptual model shows that adopting web service technology is an effective way for this task.",
"title": ""
},
{
"docid": "422adf480622a0b6011c8d0941767ba9",
"text": "The paper presents a method for the calculus of the currents in the elementary conductors and the additional winding losses for high power a.c. machines. The accuracy method estimation and the results for a hydro-generator of 216 MW validate the proposed method for the design of the Roebel bars.",
"title": ""
},
{
"docid": "8cf02bf19145df237e77273e70babc1d",
"text": "Micro-facial expressions are spontaneous, involuntary movements of the face when a person experiences an emotion but attempts to hide their facial expression, most likely in a high-stakes environment. Recently, research in this field has grown in popularity, however publicly available datasets of micro-expressions have limitations due to the difficulty of naturally inducing spontaneous micro-expressions. Other issues include lighting, low resolution and low participant diversity. We present a newly developed spontaneous micro-facial movement dataset with diverse participants and coded using the Facial Action Coding System. The experimental protocol addresses the limitations of previous datasets, including eliciting emotional responses from stimuli tailored to each participant. Dataset evaluation was completed by running preliminary experiments to classify micro-movements from non-movements. Results were obtained using a selection of spatio-temporal descriptors and machine learning. We further evaluate the dataset on emerging methods of feature difference analysis and propose an Adaptive Baseline Threshold that uses individualised neutral expression to improve the performance of micro-movement detection. In contrast to machine learning approaches, we outperform the state of the art with a recall of 0.91. The outcomes show the dataset can become a new standard for micro-movement data, with future work expanding on data representation and analysis.",
"title": ""
},
{
"docid": "fd897f886b24b2fc7d877954d5c004cd",
"text": "In this paper, we developed a detailed mathematical model of dual action pneumatic actuators controlled with proportional spool valves. Effects of nonlinear flow through the valve, air compressibility in cylinder chambers, leakage between chambers, end of stroke inactive volume, and time delay and attenuation in the pneumatic lines were carefully considered. System identification, numerical simulation and model validation experiments were conducted for two types of air cylinders and different connecting tubes length, showing very good agreement. This mathematical model will be used in the development of high performance nonlinear force controllers, with applications in teleoperation, haptic interfaces, and robotics.",
"title": ""
},
{
"docid": "7ceffe2b8345566f72027780681f2a43",
"text": "This paper presents a transistor optimization methodology for low-power analog integrated CMOS circuits, relying on the physics-based gm/ID characteristics as a design optimization guide. Our custom layout tool LIT implements and uses the ACM MOS compact model in the optimization loop. The methodology is implemented for automation within LIT and exploits all design space through the simulated annealing optimization process, providing solutions close to optimum with a single technology-dependent curve and accurate expressions for transconductance and current valid in all operation regions. The compact model itself contributes to convergence and to optimized implementations, since it has analytic expressions which are continuous in all current regimes, including weak and moderate inversion. The advantage of constraining the optimization within a power budget is of great importance for low-power CMOS. As examples we show the optimization results obtained with LIT, resulting in significant power savings, for the design of a two-stage Miller operational amplifier.",
"title": ""
},
{
"docid": "cda6f812328d1a883b0c5938695981fe",
"text": "This paper investigates the problem of weakly-supervised semantic segmentation, where image-level labels are used as weak supervision. Inspired by the successful use of Convolutional Neural Networks (CNNs) for fully-supervised semantic segmentation, we choose to directly train the CNNs over the oversegmented regions of images for weakly-supervised semantic segmentation. Although there are a few studies on CNNs-based weakly-supervised semantic segmentation, they have rarely considered the noise issue, i.e., the initial weak labels (e.g., social tags) may be noisy. To cope with this issue, we thus propose graph-boosted CNNs (GB-CNNs) for weakly-supervised semantic segmentation. In our GB-CNNs, the graph-based model provides the initial supervision for training the CNNs, and then the outcomes of the CNNs are used to retrain the graph-based model. This training procedure is iteratively implemented to boost the results of semantic segmentation. Experimental results demonstrate that the proposed model outperforms the state-of-the-art weakly-supervised methods. More notably, the proposed model is shown to be more robust in the noisy setting for weakly-supervised semantic segmentation.",
"title": ""
}
] |
scidocsrr
|
987a5a60286c2f9e6f718dbfc1b423e2
|
Building and scaling virtual clusters with residual resources from interactive clouds
|
[
{
"docid": "4dc5aee7d80e2204cc8b2e9305149cca",
"text": "MapReduce offers an ease-of-use programming paradigm for processing large data sets, making it an attractive model for distributed volunteer computing systems. However, unlike on dedicated resources, where MapReduce has mostly been deployed, such volunteer computing systems have significantly higher rates of node unavailability. Furthermore, nodes are not fully controlled by the MapReduce framework. Consequently, we found the data and task replication scheme adopted by existing MapReduce implementations woefully inadequate for resources with high unavailability.\n To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. Our tests on an emulated volunteer computing system, which uses a 60-node cluster where each node possesses a similar hardware configuration to a typical computer in a student lab, demonstrate that MOON can deliver a three-fold performance improvement to Hadoop in volatile, volunteer computing environments.",
"title": ""
}
] |
[
{
"docid": "fee87acfa909c016ae4996983cbee50a",
"text": "The multiple traveling salesperson problem (MTSP) is an extension of the well known traveling salesperson problem (TSP). Given m > 1 salespersons and n > m cities to visit, the MTSP seeks a partition of cities into m groups as well as an ordering among cities in each group so that each group of cities is visited by exactly one salesperson in their specified order in such a way that each city is visited exactly once and sum of total distance traveled by all the salespersons is minimized. Apart from the objective of minimizing the total distance traveled by all the salespersons, we have also considered an alternate objective of minimizing the maximum distance traveled by any one salesperson, which is related with balancing the workload among salespersons. In this paper, we have proposed a new grouping genetic algorithm based approach for the MTSP and compared our results with other approaches available in the literature. Our approach outperformed the other approaches on both the objectives.",
"title": ""
},
{
"docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
},
{
"docid": "88c0789e82c86b0e730480f44712012d",
"text": "In spite of their having sufficient immunogenicity, tumor vaccines remain largely ineffective. The mechanisms underlying this lack of efficacy are still unclear. Here we report a previously undescribed mechanism by which the tumor endothelium prevents T cell homing and hinders tumor immunotherapy. Transcriptional profiling of microdissected tumor endothelial cells from human ovarian cancers revealed genes associated with the absence or presence of tumor-infiltrating lymphocytes (TILs). Overexpression of the endothelin B receptor (ETBR) was associated with the absence of TILs and short patient survival time. The ETBR inhibitor BQ-788 increased T cell adhesion to human endothelium in vitro, an effect countered by intercellular adhesion molecule-1 (ICAM-1) blockade or treatment with NO donors. In mice, ETBR neutralization by BQ-788 increased T cell homing to tumors; this homing required ICAM-1 and enabled tumor response to otherwise ineffective immunotherapy in vivo without changes in systemic antitumor immune response. These findings highlight a molecular mechanism with the potential to be pharmacologically manipulated to enhance the efficacy of tumor immunotherapy in humans.",
"title": ""
},
{
"docid": "77d80da2b0cd3e8598f9c677fc8827a9",
"text": "In this report, our approach to tackling the task of ActivityNet 2018 Kinetics-600 challenge is described in detail. Though spatial-temporal modelling methods, which adopt either such end-to-end framework as I3D [1] or two-stage frameworks (i.e., CNN+RNN), have been proposed in existing state-of-the-arts for this task, video modelling is far from being well solved. In this challenge, we propose spatial-temporal network (StNet) for better joint spatial-temporal modelling and comprehensively video understanding. Besides, given that multimodal information is contained in video source, we manage to integrate both early-fusion and later-fusion strategy of multi-modal information via our proposed improved temporal Xception network (iTXN) for video understanding. Our StNet RGB single model achieves 78.99% top-1 precision in the Kinetics-600 validation set and that of our improved temporal Xception network which integrates RGB, flow and audio modalities is up to 82.35%. After model ensemble, we achieve top-1 precision as high as 85.0% on the validation set and rank No.1 among all submissions.",
"title": ""
},
{
"docid": "74ef9ec31d4799845765c7752f95720d",
"text": "With the rapid growth of social networks and microblogging websites, communication between people from different cultural and psychological backgrounds has become more direct, resulting in more and more “cyber” conflicts between these people. Consequently, hate speech is used more and more, to the point where it has become a serious problem invading these open spaces. Hate speech refers to the use of aggressive, violent or offensive language, targeting a specific group of people sharing a common property, whether this property is their gender (i.e., sexism), their ethnic group or race (i.e., racism) or their believes and religion. While most of the online social networks and microblogging websites forbid the use of hate speech, the size of these networks and websites makes it almost impossible to control all of their content. Therefore, arises the necessity to detect such speech automatically and filter any content that presents hateful language or language inciting to hatred. In this paper, we propose an approach to detect hate expressions on Twitter. Our approach is based on unigrams and patterns that are automatically collected from the training set. These patterns and unigrams are later used, among others, as features to train a machine learning algorithm. Our experiments on a test set composed of 2010 tweets show that our approach reaches an accuracy equal to 87.4% on detecting whether a tweet is offensive or not (binary classification), and an accuracy equal to 78.4% on detecting whether a tweet is hateful, offensive, or clean (ternary classification).",
"title": ""
},
{
"docid": "fe20c0bee35db1db85968b4d2793b83b",
"text": "The Smule Ocarina is a wind instrument designed for the iPhone, fully leveraging its wide array of technologies: microphone input (for breath input), multitouch (for fingering), accelerometer, real-time sound synthesis, highperformance graphics, GPS/location, and persistent data connection. In this mobile musical artifact, the interactions of the ancient flute-like instrument are both preserved and transformed via breath-control and multitouch finger-holes, while the onboard global positioning and persistent data connection provide the opportunity to create a new social experience, allowing the users of Ocarina to listen to one another. In this way, Ocarina is also a type of social instrument that enables a different, perhaps even magical, sense of global connectivity.",
"title": ""
},
{
"docid": "df36496e721bf3f0a38791b6a4b99b2d",
"text": "Support for an extremist entity such as Islamic State (ISIS) somehow manages to survive globally online despite considerable external pressure and may ultimately inspire acts by individuals having no history of extremism, membership in a terrorist faction, or direct links to leadership. Examining longitudinal records of online activity, we uncovered an ecology evolving on a daily time scale that drives online support, and we provide a mathematical theory that describes it. The ecology features self-organized aggregates (ad hoc groups formed via linkage to a Facebook page or analog) that proliferate preceding the onset of recent real-world campaigns and adopt novel adaptive mechanisms to enhance their survival. One of the predictions is that development of large, potentially potent pro-ISIS aggregates can be thwarted by targeting smaller ones.",
"title": ""
},
{
"docid": "21f8d5f566efa477597e4bf4a8121b29",
"text": "Silicon epitaxial deposition is a process strongly influenced by wafer temperature behavior, which has to be constantly monitored to avoid the production of defective wafers. However, temperature measurements are not reliable, and the sensors have to be appropriately calibrated with some dedicated procedure. A predictive maintenance (PdM) system is proposed with the aim of predicting process behavior and scheduling control actions on the sensors in advance. Two different prediction techniques have been employed and compared: the Kalman predictor and the particle filter with Gaussian kernel density estimator. The accuracy of the PdM module has been tested on real industrial production datasets.",
"title": ""
},
{
"docid": "42d2cdb17f23e22da74a405ccb71f09b",
"text": "Nostalgia is a psychological phenomenon we all can relate to but have a hard time to define. What characterizes the mental state of feeling nostalgia? What psychological function does it serve? Different published materials in a wide range of fields, from consumption research and sport science to clinical psychology, psychoanalysis and sociology, all have slightly different definition of this mental experience. Some claim it is a psychiatric disease giving melancholic emotions to a memory you would consider a happy one, while others state it enforces positivity in our mood. First in this paper a thorough review of the history of nostalgia is presented, then a look at the body of contemporary nostalgia research to see what it could be constituted of. Finally, we want to dig even deeper to see what is suggested by the literature in terms of triggers and functions. Some say that digitally recorded material like music and videos has a potential nostalgic component, which could trigger a reflection of the past in ways that was difficult before such inventions. Hinting towards that nostalgia as a cultural phenomenon is on a rising scene. Some authors say that odors have the strongest impact on nostalgic reverie due to activating it without too much cognitive appraisal. Cognitive neuropsychology has shed new light on a lot of human psychological phenomena‘s and even though empirical testing have been scarce in this field, it should get a fair scrutiny within this perspective as well and hopefully helping to clarify the definition of the word to ease future investigations, both scientifically speaking and in laymen‘s retro hysteria.",
"title": ""
},
{
"docid": "55fcec6d008f4abf377fc55b5b73f01a",
"text": "This work exploits the benefits of adaptive downtilt and vertical sectorization schemes for Long Term Evolution Advanced (LTE-A) networks equipped with active antenna systems (AAS). We highlight how the additional control in the elevation domain (via AAS) enables use of adaptive downtilt and vertical sectorization techniques, thereby improving system spectrum efficiency. Our results, based on a full 3 dimensional (3D) channel, demonstrate that adaptive downtilt achieves up to 11% cell edge and 5% cell average spectrum efficiency gains when compared to a baseline system utilizing fixed downtilt, without the need for complex coordination among cells. In addition, vertical sectorization, especially high-order vertical sectorization utilizing multiple vertical beams, which increases spatial reuse of time and frequency resources, is shown to provide even higher performance gains.",
"title": ""
},
{
"docid": "458358b21a2cc894fad1b6b02bb28f5d",
"text": "There is a considerable debate on addiction and abuse to Smartphone among adolescents and its consequent impact on their health; not only in a global context, but also specifically in the Indian population; considering that Smartphone's, globally occupy more than 50% of mobile phones market and more precise quantification of the associated problems is important to facilitate understanding in this field. As per PRISMA (2009) guidelines, extensive search of various studies in any form from a global scale to the more narrow Indian context using two key search words: \"Smartphone's addiction\" and \"Indian adolescents\" was done using websites of EMBASE, MEDLINE, PubMed, Global Health, Psyc-INFO, Biomed-Central, Web of Science, Cochrane Library, world library - World-Cat, Indian libraries such as National Medical Library of India from 1 January, 1995 to March 31, 2014 first for systematic-review. Finally, meta-analysis on only Indian studies was done using Med-Calc online software capable of doing meta-analysis of proportions. A total of 45 articles were considered in systematic-review from whole world; later on 6 studies out of these 45 related to Smartphone's addiction in India were extracted to perform meta-analysis, in which total 1304 participants (range: 165-335) were enrolled. The smartphone addiction magnitude in India ranged from 39% to 44% as per fixed effects calculated (P < 0.0001). Smartphone addiction among Indian teens can not only damage interpersonal skills, but also it can lead to significant negative health risks and harmful psychological effects on Indian adolescents.",
"title": ""
},
{
"docid": "b23679366a54a0e4fd577592c310bb12",
"text": "In this paper, we explore the application of Recursive Neural Networks on the sentiment analysis task with tweets. Tweets, being a form of communication that has been largely infused with symbols and short-hands, are especially challenging as a sentiment analysis task. In this project, we experiment with different genres of neural net and analyze how models suit the data set in which the nature of the data and model structures come to play. The neural net structures we experimented include one-hidden-layer Recursive Neural Net (RNN), two-hidden-layer RNN and Recursive Neural Tensor Net (RNTN). Different data filtering layers, such as ReLU, tanh, and drop-out also yields many insights while different combination of them might affect the performance in different ways.",
"title": ""
},
{
"docid": "23df6d913ffcdeda3de8b37977866bb7",
"text": "This paper examined the impact of customer relationship management (CRM) elements on customer satisfaction and loyalty. CRM is one of the critical strategies that can be employed by organizations to improve competitive advantage. Four critical CRM elements are measured in this study are behavior of the employees, quality of customer services, relationship development and interaction management. The study was performed at a departmental store in Tehran, Iran. The study employed quantitative approach and base on 300 respondents. Multiple regression analysis is used to examine the relationship of the variables. The finding shows that behavior of the employees is significantly relate and contribute to customer satisfaction and loyalty.",
"title": ""
},
{
"docid": "cc6ce181c808d749a9553b00d611dbc3",
"text": "With increased complexity of webpages nowadays, computation latency incurred by webpage processing during downloading operations has become a newly identified factor that may substantially affect user experiences in a mobile network. In order to tackle this issue, we propose a simple but effective transport-layer optimization technique which requires necessary context information dissemination from the mobile edge computing (MEC) server to user devices where such an algorithm is actually executed. The key novelty in this case is the mobile edge's knowledge about webpage content characteristics which is able to increase downloading throughput for user QoE enhancement. Our experiment results based on a real LTE-A test-bed show that, when the proportion of computation latency varies between 20% and 50% (which is typical for today's webpages), the downloading throughput can be improved up to 34.5%, with reduced downloading time by up to 25.1%",
"title": ""
},
{
"docid": "16a384727d6a323437a0b6ed3cdcc230",
"text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.",
"title": ""
},
{
"docid": "c51acd24cb864b050432a055fef2de9a",
"text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.",
"title": ""
},
{
"docid": "fee96195e50e7418b5d63f8e6bd07907",
"text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.",
"title": ""
},
{
"docid": "ef925e9d448cf4ca9a889b5634b685cf",
"text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.",
"title": ""
},
{
"docid": "f438c1b133441cd46039922c8a7d5a7d",
"text": "This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime i.e. it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.",
"title": ""
}
] |
scidocsrr
|
0f0bb92d4c120fa1b0aa58e99c13ea1e
|
Aligning Where to See and What to Tell: Image Captioning with Region-Based Attention and Scene-Specific Contexts
|
[
{
"docid": "06c0ee8d139afd11aab1cc0883a57a68",
"text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.",
"title": ""
}
] |
[
{
"docid": "623cdf022d333ca4d6b244f54d301650",
"text": "Alveolar rhabdomyosarcoma (ARMS) are aggressive soft tissue tumors harboring specific fusion transcripts, notably PAX3-FOXO1 (P3F). Current therapy concepts result in unsatisfactory survival rates making the search for innovative approaches necessary: targeting PAX3-FOXO1 could be a promising strategy. In this study, we developed integrin receptor-targeted Lipid-Protamine-siRNA (LPR) nanoparticles using the RGD peptide and validated target specificity as well as their post-silencing effects. We demonstrate that RGD-LPRs are specific to ARMS in vitro and in vivo. Loaded with siRNA directed against the breakpoint of P3F, these particles efficiently down regulated the fusion transcript and inhibited cell proliferation, but did not induce substantial apoptosis. In a xenograft ARMS model, LPR nanoparticles targeting P3F showed statistically significant tumor growth delay as well as inhibition of tumor initiation when injected in parallel with the tumor cells. These findings suggest that RGD-LPR targeting P3F are promising to be highly effective in the setting of minimal residual disease for ARMS.",
"title": ""
},
{
"docid": "4b988535edefeb3ff7df89bcb900dd1c",
"text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the",
"title": ""
},
{
"docid": "991420a2abaf1907ab4f5a1c2dcf823d",
"text": "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets.",
"title": ""
},
{
"docid": "2cac621e3ef4547cf974e1e14fc9fb87",
"text": "The wireless Internet of Things interconnects numerous constrained devices such as sensors and actuators not only with each other, but also with cloud services. We demonstrate a low power and lossy Information-Centric Network interworking with a cloud in an industrial application. Our approach includes a lightweight publish-subscribe system for NDN and an ICN-to-MQTT gateway which translates between NDN names and MQTT topics. This demo is based on RIOT and CCN-lite.",
"title": ""
},
{
"docid": "7b730ec53bcc62f49899a5f7a2bc590d",
"text": "It is difficult to build a real network to test novel experiments. OpenFlow makes it easier for researchers to run their own experiments by providing a virtual slice and configuration on real networks. Multiple users can share the same network by assigning a different slice for each one. Users are given the responsibility to maintain and use their own slice by writing rules in a FlowTable. Misconfiguration problems can arise when a user writes conflicting rules for single FlowTable or even within a path of multiple OpenFlow switches that need multiple FlowTables to be maintained at the same time.\n In this work, we describe a tool, FlowChecker, to identify any intra-switch misconfiguration within a single FlowTable. We also describe the inter-switch or inter-federated inconsistencies in a path of OpenFlow switches across the same or different OpenFlow infrastructures. FlowChecker encodes FlowTables configuration using Binary Decision Diagrams and then uses the model checker technique to model the inter-connected network of OpenFlow switches.",
"title": ""
},
{
"docid": "cbe9729b403a07386a76447c4339c5f3",
"text": "Network appliances perform different functions on network flows and constitute an important part of an operator's network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.",
"title": ""
},
{
"docid": "b492a0063354a81bd99ac3f81c3fb1ec",
"text": "— Bangla automatic number plate recognition (ANPR) system using artificial neural network for number plate inscribing in Bangla is presented in this paper. This system splits into three major parts-number plate detection, plate character segmentation and Bangla character recognition. In number plate detection there arises many problems such as vehicle motion, complex background, distance changes etc., for this reason edge analysis method is applied. As Bangla number plate consists of two words and seven characters, detected number plates are segmented into individual words and characters by using horizontal and vertical projection analysis. After that a robust feature extraction method is employed to extract the information from each Bangla words and characters which is non-sensitive to the rotation, scaling and size variations. Finally character recognition system takes this information as an input to recognize Bangla characters and words. The Bangla character recognition is implemented using multilayer feed-forward network. According to the experimental result, (The abstract needs some exact figures of findings (like success rates of recognition) and how much the performance is better than previous one.) the performance of the proposed system on different vehicle images is better in case of severe image conditions.",
"title": ""
},
{
"docid": "7ffaedeabffcc9816d1eb83a4e4cdfd0",
"text": "In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English ↔ Japanese bidirectional translation tasks show proposed models achieve BLEU scores that approach the softmax, while reducing memory usage to the order of less than 1/10 and improving decoding speed on CPUs by x5 to x10.",
"title": ""
},
{
"docid": "1994429bea369cf4f4395095789b3ec4",
"text": "Since Software-Defined Networking (SDN) gains popularity, mobile/wireless support is mentioned with importance to be handled as one of the crucial aspects in SDN. SDN introduces a centralized entity called SDN controller with the holistic view of the topology on the separated control/data plane architecture. Leveraging the features provided in the SDN controller, mobility management can be simply designed and lightweight, thus there is no need to define and rely on new mobility entities such as given in the traditional IP mobility management architectures. In this paper, we design and implement lightweight IPv6 mobility management in Open Network Operating System (ONOS) that is an open-source SDN control platform for service providers. For the lightweight mobility management, we implement the Neighbor Discovery Proxy (ND Proxy) function into the OpenFlow-enabled AP and switches, and ONOS controller module to handle the receiving ICMPv6 message and to send the unique home network prefix address to an IPv6 host. Thus this approach enables mobility management without bringing or integrating on traditional IP mobility protocols. The proposed idea was experimentally evaluated in the ONOS controller and Raspberry Pi based testbed, identifying the obtained handoff signaling latency is in the acceptable performance range.",
"title": ""
},
{
"docid": "9f5998ebc2457c330c29a10772d8ee87",
"text": "Fuzzy hashing is a known technique that has been adopted to speed up malware analysis processes. However, Hashing has not been fully implemented for malware detection because it can easily be evaded by applying a simple obfuscation technique such as packing. This challenge has limited the usage of hashing to triaging of the samples based on the percentage of similarity between the known and unknown. In this paper, we explore the different ways fuzzy hashing can be used to detect similarities in a file by investigating particular hashes of interest. Each hashing method produces independent but related interesting results which are presented herein. We further investigate combination techniques that can be used to improve the detection rates in hashing methods. Two such evidence combination theory based methods are applied in this work in order propose a novel way of combining the results achieved from different hashing algorithms. This study focuses on file and section Ssdeep hashing, PeHash and Imphash techniques to calculate the similarity of the Portable Executable files. Our results show that the detection rates are improved when evidence combination techniques are used.",
"title": ""
},
{
"docid": "2a057079c544b97dded598b6f0d750ed",
"text": "Introduction Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions:",
"title": ""
},
{
"docid": "a5ce24236867a513a19d98bd46bf99d2",
"text": "The mandala thangka, as a religious art in Tibetan Buddhism, is an invaluable cultural and artistic heritage. However, drawing a mandala is both time and effort consuming and requires mastery skills due to its intricate details. Retaining and digitizing this heritage is an unresolved research challenge to date. In this paper, we propose a computer-aided generation approach of mandala thangka patterns to address this issue. Specifically, we construct parameterized models of three stylistic patterns used in the interior mandalas of Nyingma school in Tibetan Buddhism according to their geometric features, namely the star, crescent and lotus flower patterns. Varieties of interior mandalas are successfully generated using these proposed patterns based on the hierarchical structures observed from hand drawn mandalas. The experimental results show that our approach can efficiently generate beautifully-layered colorful interior mandalas, which significantly reduces the time and efforts in manual production and, more importantly, contributes to the digitization of this great heritage.",
"title": ""
},
{
"docid": "cb70ab2056242ca739adde4751fbca2c",
"text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1",
"title": ""
},
{
"docid": "fd14b9e25affb05fd9b05036f3ce350b",
"text": "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11:89%, outperforming the second best method by 10%.",
"title": ""
},
{
"docid": "3a301b11b704e34af05c9072d8353696",
"text": "Attention-deficit hyperactivity disorder (ADHD) is typically characterized as a disorder of inattention and hyperactivity/impulsivity but there is increasing evidence of deficits in motivation. Using positron emission tomography (PET), we showed decreased function in the brain dopamine reward pathway in adults with ADHD, which, we hypothesized, could underlie the motivation deficits in this disorder. To evaluate this hypothesis, we performed secondary analyses to assess the correlation between the PET measures of dopamine D2/D3 receptor and dopamine transporter availability (obtained with [11C]raclopride and [11C]cocaine, respectively) in the dopamine reward pathway (midbrain and nucleus accumbens) and a surrogate measure of trait motivation (assessed using the Achievement scale on the Multidimensional Personality Questionnaire or MPQ) in 45 ADHD participants and 41 controls. The Achievement scale was lower in ADHD participants than in controls (11±5 vs 14±3, P<0.001) and was significantly correlated with D2/D3 receptors (accumbens: r=0.39, P<0.008; midbrain: r=0.41, P<0.005) and transporters (accumbens: r=0.35, P<0.02) in ADHD participants, but not in controls. ADHD participants also had lower values in the Constraint factor and higher values in the Negative Emotionality factor of the MPQ but did not differ in the Positive Emotionality factor—and none of these were correlated with the dopamine measures. In ADHD participants, scores in the Achievement scale were also negatively correlated with symptoms of inattention (CAARS A, E and SWAN I). These findings provide evidence that disruption of the dopamine reward pathway is associated with motivation deficits in ADHD adults, which may contribute to attention deficits and supports the use of therapeutic interventions to enhance motivation in ADHD.",
"title": ""
},
{
"docid": "d066670bbf58a2c96fa3ef2c037166b1",
"text": "Artificial neural networks are applied in many situations. neuralnet is built to train multi-layer perceptrons in the context of regression analyses, i.e. to approximate functional relationships between covariates and response variables. Thus, neural networks are used as extensions of generalized linear models. neuralnet is a very flexible package. The backpropagation algorithm and three versions of resilient backpropagation are implemented and it provides a custom-choice of activation and error function. An arbitrary number of covariates and response variables as well as of hidden layers can theoretically be included. The paper gives a brief introduction to multilayer perceptrons and resilient backpropagation and demonstrates the application of neuralnet using the data set infert, which is contained in the R distribution.",
"title": ""
},
{
"docid": "faea3dad1f13b8c4be3d4d5ffa88dcf1",
"text": "Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book’s methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives.",
"title": ""
},
{
"docid": "b9e765f42f3cf099ff3de0c7c00bddb4",
"text": "In general, meta-parameters in a reinforcement learning system, such as a learning rate and a discount rate, are empirically determined and fixed during learning. When an external environment is therefore changed, the sytem cannot adapt itself to the variation. Meanwhile, it is suggested that the biological brain might conduct reinforcement learning and adapt itself to the external environment by controlling neuromodulators corresponding to the meta-parameters. In the present paper, based on the above suggestion, a method to adjust metaparameters using a temporal difference (TD) error is proposed. Through various computer simulations using a maze search problem and an inverted pendulum control problem, it is verified that the proposed method could appropriately adjust meta-parameters according to the variation of the external environment.",
"title": ""
},
{
"docid": "ff34e210711483fad6ab7254b7e64430",
"text": "QR (Quick Response) Codes are widely used as a convenient unidirectional communication channel to convey information, such as emails, hyperlinks, or phone numbers, from publicity materials to mobile devices. But the QR Code is not visually appealing and takes up valuable space of publicity materials. In this paper, we propose a new method to embed QR Code on digital screen via temporal psychovisual modulation (TPVM). By exploiting the difference between human eyes and semiconductor imaging sensors in temporal convolution of optical signals, we make QR Code perceptually transparent to human but detectable for mobile devices. Based on the idea of invisible QR Code, many applications can be implemented, e.g., \"physical hyperlink\" for something interesting on TV or digital signage , \"invisible watermark\" for anti-piracy in theater. A prototype system introduced in this paper serves as a proof-of-concept of the invisible QR Code and can be improved in future works.",
"title": ""
},
{
"docid": "c205fe5272318a7c2a4d8f8c51244a74",
"text": "In this paper we describe a new approach to creating rich, dynamic and customized maps for business or leisure activities, and demonstrate how the approach can be implemented through a prototype system. The approach is aimed at changing the way we map the world by providing a meaningful and personalized context that is augmented with the semantic web, social media integration and sentiment analysis. In our approach, smart search for an entity on a map is assisted through the semantic web. Once a map entity is identified, real-time and dynamic information about properties of the entity is gathered from social media and is integrated into the map. Sentiment analysis with regard to the map entity can be conducted and its results displayed or used as filters. The main benefit of the proposed approach is two-fold: it allows users to define their own user experience or context by selecting specific properties they want to display on their maps (ratings, comments, pictures and other specific business information), and it organizes interactive maps through the hierarchy of entities/markers/layers/timeframes. We also compare our approach with related work.",
"title": ""
}
] |
scidocsrr
|
c714aa5ee992fd0fa4944f768f86b11e
|
STRATEGIC PLANNING IN A TURBULENT ENVIRONMENT : EVIDENCE FROM THE OIL MAJORS
|
[
{
"docid": "77e501546d95fa18cf2a459fae274875",
"text": "Complex organizations exhibit surprising, nonlinear behavior. Although organization scientists have studied complex organizations for many years, a developing set of conceptual and computational tools makes possible new approaches to modeling nonlinear interactions within and between organizations. Complex adaptive system models represent a genuinely new way of simplifying the complex. They are characterized by four key elements: agents with schemata, self-organizing networks sustained by importing energy, coevolution to the edge of chaos, and system evolution based on recombination. New types of models that incorporate these elements will push organization science forward by merging empirical observation with computational agent-based simulation. Applying complex adaptive systems models to strategic management leads to an emphasis on building systems that can rapidly evolve effective adaptive solutions. Strategic direction of complex organizations consists of establishing and modifying environments within which effective, improvised, self-organized solutions can evolve. Managers influence strategic behavior by altering the fitness landscape for local agents and reconfiguring the organizational architecture within which agents adapt. (Complexity Theory; Organizational Evolution; Strategic Management) Since the open-systems view of organizations began to diffuse in the 1960s, comnplexity has been a central construct in the vocabulary of organization scientists. Open systems are open because they exchange resources with the environment, and they are systems because they consist of interconnected components that work together. In his classic discussion of hierarchy in 1962, Simon defined a complex system as one made up of a large number of parts that have many interactions (Simon 1996). Thompson (1967, p. 6) described a complex organization as a set of interdependent parts, which together make up a whole that is interdependent with some larger environment. Organization theory has treated complexity as a structural variable that characterizes both organizations and their environments. With respect to organizations, Daft (1992, p. 15) equates complexity with the number of activities or subsystems within the organization, noting that it can be measured along three dimensions. Vertical complexity is the number of levels in an organizational hierarchy, horizontal complexity is the number of job titles or departments across the organization, and spatial complexity is the number of geographical locations. With respect to environments, complexity is equated with the number of different items or elements that must be dealt with simultaneously by the organization (Scott 1992, p. 230). Organization design tries to match the complexity of an organization's structure with the complexity of its environment and technology (Galbraith 1982). The very first article ever published in Organization Science suggested that it is inappropriate for organization studies to settle prematurely into a normal science mindset, because organizations are enormously complex (Daft and Lewin 1990). What Daft and Lewin meant is that the behavior of complex systems is surprising and is hard to 1047-7039/99/1003/0216/$05.OO ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 Copyright ? 1999, Institute for Operations Research pp. 216-232 and the Management Sciences PHILIP ANDERSON Complexity Theory and Organization Science predict, because it is nonlinear (Casti 1994). 
In nonlinear systems, intervening to change one or two parameters a small amount can drastically change the behavior of the whole system, and the whole can be very different from the sum of the parts. Complex systems change inputs to outputs in a nonlinear way because their components interact with one another via a web of feedback loops. Gell-Mann (1994a) defines complexity as the length of the schema needed to describe and predict the properties of an incoming data stream by identifying its regularities. Nonlinear systems can be difficult to compress into a parsimonious description: this is what makes them complex (Casti 1994). According to Simon (1996, p. 1), the central task of a natural science is to show that complexity, correctly viewed, is only a mask for simplicity. Both social scientists and people in organizations reduce a complex description of a system to a simpler one by abstracting out what is unnecessary or minor. To build a model is to encode a natural system into a formal system, compressing a longer description into a shorter one that is easier to grasp. Modeling the nonlinear outcomes of many interacting components has been so difficult that both social and natural scientists have tended to select more analytically tractable problems (Casti 1994). Simple boxes-and-arrows causal models are inadequate for modeling systems with complex interconnections and feedback loops, even when nonlinear relations between dependent and independent variables are introduced by means of exponents, logarithms, or interaction terms. How else might we compress complex behavior so we can comprehend it? For Perrow (1967), the more complex an organization is, the less knowable it is and the more deeply ambiguous is its operation. Modern complexity theory suggests that some systems with many interactions among highly differentiated parts can produce surprisingly simple, predictable behavior, while others generate behavior that is impossible to forecast, though they feature simple laws and few actors. As Cohen and Stewart (1994) point out, normal science shows how complex effects can be understood from simple laws; chaos theory demonstrates that simple laws can have complicated, unpredictable consequences; and complexity theory describes how complex causes can produce simple effects. Since the mid-1980s, new approaches to modeling complex systems have been emerging from an interdisciplinary invisible college, anchored on the Santa Fe Institute (see Waldrop 1992 for a historical perspective). The agenda of these scholars includes identifying deep principles underlying a wide variety of complex systems, be they physical, biological, or social (Fontana and Ballati 1999). Despite somewhat frequent declarations that a new paradigm has emerged, it is still premature to declare that a science of complexity, or even a unified theory of complex systems, exists (Horgan 1995). Holland and Miller (1991) have likened the present situation to that of evolutionary theory before Fisher developed a mathematical theory of genetic selection. This essay is not a review of the emerging body of research in complex systems, because that has been ably reviewed many times, in ways accessible to both scholars and managers. Table 1 describes a number of recent, prominent books and articles that inform this literature; Heylighen (1997) provides an excellent introductory bibliography, with a more comprehensive version available on the Internet at http://pespmcl.vub.ac.be/Evocobib.html. 
Organization science has passed the point where we can regard as novel a summary of these ideas or an assertion that an empirical phenomenon is consistent with them (see Browning et al. 1995 for a pathbreaking example). Six important insights, explained at length in the works cited in Table 1, should be regarded as well-established scientifically. First, many dynamical systems (whose state at time t determines their state at time t + 1) do not reach either a fixed-point or a cyclical equilibrium (see Dooley and Van de Ven's paper in this issue). Second, processes that appear to be random may be chaotic, revolving around identifiable types of attractors in a deterministic way that seldom if ever returns to the same state. An attractor is a limited area in a system's state space that it never departs. Chaotic systems revolve around \"strange attractors,\" fractal objects that constrain the system to a small area of its state space, which it explores in a never-ending series that does not repeat in a finite amount of time. Tests exist that can establish whether a given process is random or chaotic (Koput 1997, Ott 1993). Similarly, time series that appear to be random walks may actually be fractals with self-reinforcing trends (Bar-Yam 1997). Third, the behavior of complex processes can be quite sensitive to small differences in initial conditions, so that two entities with very similar initial states can follow radically divergent paths over time. Consequently, historical accidents may \"tip\" outcomes strongly in a particular direction (Arthur 1989). Fourth, complex systems resist simple reductionist analyses, because interconnections and feedback loops preclude holding some subsystems constant in order to study others in isolation. Because descriptions at multiple scales are necessary to identify how emergent properties are produced (Bar-Yam 1997), reductionism and holism are complementary strategies in analyzing such systems (Fontana and Ballati 1999). Table 1. Selected Resources that Provide an Overview of Complexity Theory. Allison and Kelly, 1999: Written for managers, this book provides an overview of major themes in complexity theory and discusses practical applications rooted in experiences at firms such as Citicorp. Bar-Yam, 1997: A very comprehensive introduction for mathematically sophisticated readers, the book discusses the major computational techniques used to analyze complex systems, including spin-glass models, cellular automata, simulation methodologies, and fractal analysis. Models are developed to describe neural networks, protein folding, developmental biology, and the evolution of human civilization. Brown and Eisenhardt, 1998: Although this book is not an introduction to complexity theory, a series of small tables throughout the text introduces and explains most of the important concepts. The purpose of the book is to view stra",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "92d3987fc0b5d5962f50871ecc23743e",
"text": "Wireless sensor networks (WSNs) have become a hot area of research in recent years due to the realization of their ability in myriad applications including military surveillance, facility monitoring, target detection, and health care applications. However, many WSN design problems involve tradeoffs between multiple conflicting optimization objectives such as coverage preservation and energy conservation. Many of the existing sensor network design approaches, however, generally focus on a single optimization objective. For example, while both energy conservation in a cluster-based WSNs and coverage-maintenance protocols have been extensively studied in the past, these have not been integrated in a multi-objective optimization manner. This paper employs a recently developed multiobjective optimization algorithm, the so-called multi-objective evolutionary algorithm based on decomposition (MOEA/D) to solve simultaneously the coverage preservation and energy conservation design problems in cluster-based WSNs. The performance of the proposed approach, in terms of coverage and network lifetime is compared with a state-of-the-art evolutionary approach called NSGA II. Under the same environments, simulation results on different network topologies reveal that MOEA/D provides a feasible approach for extending the network lifetime while preserving more coverage area.",
"title": ""
},
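As a reading aid only (this is not code from the cited paper), the decomposition step that gives MOEA/D its name can be illustrated with a weighted Tchebycheff scalarization of the two WSN objectives mentioned above, coverage loss and energy use; the candidate values, weights, and objective names below are hypothetical.

```python
# Illustrative sketch (not from the cited paper): MOEA/D turns the two WSN
# objectives -- coverage loss and energy consumption -- into scalar
# subproblems via Tchebycheff scalarization.
import numpy as np

def tchebycheff(objs, weights, ideal):
    """Scalarize an objective vector for one MOEA/D subproblem.

    objs    : objective values to minimize, e.g. [coverage_loss, energy]
    weights : weight vector for this subproblem (non-negative)
    ideal   : best value seen so far for each objective (reference point)
    """
    return np.max(weights * np.abs(objs - ideal))

# Hypothetical example: two candidate cluster-head assignments evaluated on
# coverage loss (fraction of area uncovered) and normalized energy use.
candidates = np.array([[0.05, 0.60],
                       [0.12, 0.35]])
ideal_point = candidates.min(axis=0)
weights = np.array([0.7, 0.3])          # this subproblem favours coverage

scores = [tchebycheff(c, weights, ideal_point) for c in candidates]
best = int(np.argmin(scores))           # candidate kept for this subproblem
print(scores, best)
```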
{
"docid": "681221fa1c48361dfc5916c66580c855",
"text": "Until recently, those deep steganalyzers in spatial domain are all designed for gray-scale images. In this paper, we propose WISERNet (the wider separate-then-reunion network) for steganalysis of color images. We provide theoretical rationale to claim that the summation in normal convolution is one sort of “linear collusion attack” which reserves strong correlated patterns while impairs uncorrelated noises. Therefore in the bottom convolutional layer which aims at suppressing correlated image contents, we adopt separate channel-wise convolution without summation instead. Conversely, in the upper convolutional layers we believe that the summation in normal convolution is beneficial. Therefore we adopt united normal convolution in those layers and make them remarkably wider to reinforce the effect of “linear collusion attack”. As a result, our proposed wide-and-shallow, separate-then-reunion network structure is specifically suitable for color image steganalysis. We have conducted extensive experiments on color image datasets generated from BOSSBase raw images, with different demosaicking algorithms and downsampling algorithms. The experimental results show that our proposed network outperform other state-of-the-art color image steganalytic models either hand-crafted or learned using deep networks in the literature by a clear margin. Specifically, it is noted that the detection performance gain is achieved with less than half the complexity compared to the most advanced deeplearning steganalyzer as far as we know, which is scarce in the literature.",
"title": ""
},
{
"docid": "ba36e8232460f64fa48c517b264d7254",
"text": "We introduce an extension to CCG that allows form and function to be represented simultaneously, reducing the proliferation of modifier categories seen in standard CCG analyses. We can then remove the non-combinatory rules CCGbank uses to address this problem, producing a grammar that is fully lexicalised and far less ambiguous. There are intrinsic benefits to full lexicalisation, such as semantic transparency and simpler domain adaptation. The clearest advantage is a 52-88% improvement in parse speeds, which comes with only a small reduction in accuracy.",
"title": ""
},
{
"docid": "9775092feda3a71c1563475bae464541",
"text": "Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, sptitting flow at nodes where several outgoing tinks are on shortest paths to the destination. The weights of the tinks, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic rec. ommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimiz@ the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly it turned out that for the proposed AT&T WorldNet backbone, we found weight settiis that performed within a few percent from that of the optimal general routing where the flow for each demand is optimalty distributed over all paths between source and destination. This contrasts the common belief that OSPF routing leads to congestion and it shows that for the network and demand matrix studied we cannot get a substantially better load balancing by switching to the proposed more flexible Multi-protocol Label Switching (MPLS) technologies. Our techniques were atso tested on synthetic internetworks, based on a model of Zegura et al. (INFOCOM’96), for which we dld not always get quite as close to the optimal general routing. However, we compared witIs standard heuristics, such as weights inversely proportional to the capac.. ity or proportioml to the physical distances, and found that, for the same network and capacities, we could support a 50 Yo-1 10% increase in the demands. Our assumed demand matrix can also be seen as modeling service level agreements (SLAS) with customers, with demands representing guarantees of throughput for virtnal leased lines. Keywords— OSPF, MPLS, traffic engineering, local search, hashing ta. bles, dynamic shortest paths, mntti-cosnmodity network flows.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
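To make the schedule described in that passage concrete, here is a minimal sketch of the same idea, drop the learning rate once dev F1 crosses a threshold, then stop after 20 epochs without improvement; the `model`/`optimizer` interfaces and the epoch budget are placeholders, not the authors' code.

```python
# Minimal sketch of the schedule described above (the training-loop helpers
# are hypothetical): reduce the learning rate once dev F1 reaches a
# threshold, then stop after `patience` epochs without improvement.
def train(model, evaluate_f1, optimizer, max_epochs=500,
          f1_threshold=0.91, patience=20):
    best_f1, epochs_since_best, lr_reduced = 0.0, 0, False
    for epoch in range(max_epochs):
        model.train_one_epoch(optimizer)      # hypothetical helper
        f1 = evaluate_f1(model)               # dev-set F1 in [0, 1]
        if not lr_reduced and f1 >= f1_threshold:
            optimizer.lr = 1e-5               # drop from 1e-4 to 1e-5
            lr_reduced = True
        if f1 > best_f1:
            best_f1, epochs_since_best = f1, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:  # early stopping
                break
    return best_f1
```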
{
"docid": "6127d1952432dcf5c2339bf52d70ea0b",
"text": "Crystalline metal-organic frameworks (MOFs) are porous frameworks comprising an infinite array of metal nodes connected by organic linkers. The number of novel MOF structures reported per year is now in excess of 6000, despite significant increases in the complexity of both component units and molecular networks. Their regularly repeating structures give rise to chemically variable porous architectures, which have been studied extensively due to their sorption and separation potential. More recently, catalytic applications have been proposed that make use of their chemical tunability, while reports of negative linear compressibility and negative thermal expansion have further expanded interest in the field. Amorphous metal-organic frameworks (aMOFs) retain the basic building blocks and connectivity of their crystalline counterparts, though they lack any long-range periodic order. Aperiodic arrangements of atoms result in their X-ray diffraction patterns being dominated by broad \"humps\" caused by diffuse scattering and thus they are largely indistinguishable from one another. Amorphous MOFs offer many exciting opportunities for practical application, either as novel functional materials themselves or facilitating other processes, though the domain is largely unexplored (total aMOF reported structures amounting to under 30). Specifically, the use of crystalline MOFs to detect harmful guest species before subsequent stress-induced collapse and guest immobilization is of considerable interest, while functional luminescent and optically active glass-like materials may also be prepared in this manner. The ion transporting capacity of crystalline MOFs might be improved during partial structural collapse, while there are possibilities of preparing superstrong glasses and hybrid liquids during thermal amorphization. The tuning of release times of MOF drug delivery vehicles by partial structural collapse may be possible, and aMOFs are often more mechanically robust than crystalline materials, which is of importance for industrial applications. In this Account, we describe the preparation of aMOFs by introduction of disorder into their parent crystalline frameworks through heating, pressure (both hydrostatic and nonhydrostatic), and ball-milling. The main method of characterizing these amorphous materials (analysis of the pair distribution function) is summarized, alongside complementary techniques such as Raman spectroscopy. Detailed investigations into their properties (both chemical and mechanical) are compiled and compared with those of crystalline MOFs, while the impact of the field on the processing techniques used for crystalline MOF powders is also assessed. Crucially, the benefits amorphization may bring to existing proposed MOF applications are detailed, alongside the possibilities and research directions afforded by the combination of the unique properties of the amorphous domain with the versatility of MOF chemistry.",
"title": ""
},
{
"docid": "a3fdbc08bd9b73474319f9bc5c510f85",
"text": "With the rapid increase of mobile devices, the computing load of roadside cloudlets is fast growing. When the computation tasks of the roadside cloudlet reach the limit, the overload may generate heat radiation problem and unacceptable delay to mobile users. In this paper, we leverage the characteristics of buses and propose a scalable fog computing paradigm with servicing offloading in bus networks. The bus fog servers not only provide fog computing services for the mobile users on bus, but also are motivated to accomplish the computation tasks offloaded by roadside cloudlets. By this way, the computing capability of roadside cloudlets is significantly extended. We consider an allocation strategy using genetic algorithm (GA). With this strategy, the roadside cloudlets spend the least cost to offload their computation tasks. Meanwhile, the user experience of mobile users are maintained. The simulations validate the advantage of the propose scheme.",
"title": ""
},
{
"docid": "606bc892776616ffd4f9f9dc44565019",
"text": "Despite the various attractive features that Cloud has to offer, the rate of Cloud migration is rather slow, primarily due to the serious security and privacy issues that exist in the paradigm. One of the main problems in this regard is that of authorization in the Cloud environment, which is the focus of our research. In this paper, we present a systematic analysis of the existing authorization solutions in Cloud and evaluate their effectiveness against well-established industrial standards that conform to the unique access control requirements in the domain. Our analysis can benefit organizations by helping them decide the best authorization technique for deployment in Cloud; a case study along with simulation results is also presented to illustrate the procedure of using our qualitative analysis for the selection of an appropriate technique, as per Cloud consumer requirements. From the results of this evaluation, we derive the general shortcomings of the extant access control techniques that are keeping them from providing successful authorization and, therefore, widely adopted by the Cloud community. To that end, we enumerate the features an ideal access control mechanisms for the Cloud should have, and combine them to suggest the ultimate solution to this major security challenge — access control as a service (ACaaS) for the software as a service (SaaS) layer. We conclude that a meticulous research is needed to incorporate the identified authorization features into a generic ACaaS framework that should be adequate for providing high level of extensibility and security by integrating multiple access control models.",
"title": ""
},
{
"docid": "61b02ae1994637115e3baec128f05bd8",
"text": "Ensuring reliability as the electrical grid morphs into the “smart grid” will require innovations in how we assess the state of the grid, for the purpose of proactive maintenance, rather than reactive maintenance – in the future, we will not only react to failures, but also try to anticipate and avoid them using predictive modeling (machine learning) techniques. To help in meeting this challenge, we present the Neutral Online Visualization-aided Autonomic evaluation framework (NOVA) for evaluating machine learning algorithms for preventive maintenance on the electrical grid. NOVA has three stages provided through a unified user interface: evaluation of input data quality, evaluation of machine learning results, and evaluation of the reliability improvement of the power grid. A prototype version of NOVA has been deployed for the power grid in New York City, and it is able to evaluate machine learning systems effectively and efficiently. Appearing in the ICML 2011 Workshop on Machine Learning for Global Challenges, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).",
"title": ""
},
{
"docid": "a57b2e8b24cced6f8bfad942dd530499",
"text": "With the tremendous growth of network-based services and sensitive information on networks, network security is getting more and more importance than ever. Intrusion poses a serious security risk in a network environment. The ever growing new intrusion types posses a serious problem for their detection. The human labelling of the available network audit data instances is usually tedious, time consuming and expensive. In this paper, we apply one of the efficient data mining algorithms called naïve bayes for anomaly based network intrusion detection. Experimental results on the KDD cup’99 data set show the novelty of our approach in detecting network intrusion. It is observed that the proposed technique performs better in terms of false positive rate, cost, and computational time when applied to KDD’99 data sets compared to a back propagation neural network based approach.",
"title": ""
},
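For illustration only, the kind of classifier that passage describes can be sketched in a few lines with scikit-learn's Gaussian naive Bayes; the feature matrix below is synthetic stand-in data, not KDD Cup '99 (the real dataset additionally needs categorical encoding and preprocessing).

```python
# Illustrative sketch only: a Gaussian naive Bayes anomaly classifier of the
# kind the passage describes, trained on synthetic stand-in features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X_normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 10))
X_attack = rng.normal(loc=2.0, scale=1.5, size=(200, 10))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 1000 + [1] * 200)          # 0 = normal, 1 = intrusion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```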
{
"docid": "6ac6e57937fa3d2a8e319ce17d960c34",
"text": "In various application domains there is a desire to compare process models, e.g., to relate an organization-specific process model to a reference model, to find a web service matching some desired service description, or to compare some normative process model with a process model discovered using process mining techniques. Although many researchers have worked on different notions of equivalence (e.g., trace equivalence, bisimulation, branching bisimulation, etc.), most of the existing notions are not very useful in this context. First of all, most equivalence notions result in a binary answer (i.e., two processes are equivalent or not). This is not very helpful, because, in real-life applications, one needs to differentiate between slightly different models and completely different models. Second, not all parts of a process model are equally important. There may be parts of the process model that are rarely activated while other parts are executed for most process instances. Clearly, these should be considered differently. To address these problems, this paper proposes a completely new way of comparing process models. Rather than directly comparing two models, the process models are compared with respect to some typical behavior. This way we are able to avoid the two problems. Although the results are presented in the context of Petri nets, the approach can be applied to any process modeling language with executable semantics.",
"title": ""
},
{
"docid": "d61e481378ee88da7a33cf88bf69dbef",
"text": "Deep neural networks (DNNs) have achieved tremendous success in many tasks of machine learning, such as the image classification. Unfortunately, researchers have shown that DNNs are easily attacked by adversarial examples, slightly perturbed images which can mislead DNNs to give incorrect classification results. Such attack has seriously hampered the deployment of DNN systems in areas where security or safety requirements are strict, such as autonomous cars, face recognition, malware detection. Defensive distillation is a mechanism aimed at training a robust DNN which significantly reduces the effectiveness of adversarial examples generation. However, the state-of-the-art attack can be successful on distilled networks with 100% probability. But it is a white-box attack which needs to know the inner information of DNN. Whereas, the black-box scenario is more general. In this paper, we first propose the -neighborhood attack, which can fool the defensively distilled networks with 100% success rate in the white-box setting, and it is fast to generate adversarial examples with good visual quality. On the basis of this attack, we further propose the regionbased attack against defensively distilled DNNs in the blackbox setting. And we also perform the bypass attack to indirectly break the distillation defense as a complementary method. The experimental results show that our black-box attacks have a considerable success rate on defensively distilled networks.",
"title": ""
},
{
"docid": "c4490ecc0b0fb0641dc41313d93ccf44",
"text": "Machine learning predictive modeling algorithms are governed by “hyperparameters” that have no clear defaults agreeable to a wide range of applications. The depth of a decision tree, number of trees in a forest, number of hidden layers and neurons in each layer in a neural network, and degree of regularization to prevent overfitting are a few examples of quantities that must be prescribed for these algorithms. Not only do ideal settings for the hyperparameters dictate the performance of the training process, but more importantly they govern the quality of the resulting predictive models. Recent efforts to move from a manual or random adjustment of these parameters include rough grid search and intelligent numerical optimization strategies. This paper presents an automatic tuning implementation that uses local search optimization for tuning hyperparameters of modeling algorithms in SAS® Visual Data Mining and Machine Learning. The AUTOTUNE statement in the TREESPLIT, FOREST, GRADBOOST, NNET, SVMACHINE, and FACTMAC procedures defines tunable parameters, default ranges, user overrides, and validation schemes to avoid overfitting. Given the inherent expense of training numerous candidate models, the paper addresses efficient distributed and parallel paradigms for training and tuning models on the SAS® ViyaTM platform. It also presents sample tuning results that demonstrate improved model accuracy and offers recommendations for efficient and effective model tuning.",
"title": ""
},
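As a generic stand-in for the tuning idea in that passage (random sampling of hyperparameter configurations with cross-validation to guard against overfitting), the sketch below uses scikit-learn's RandomizedSearchCV rather than the SAS AUTOTUNE statement discussed there; the search space and iteration budget are arbitrary illustrations.

```python
# Generic stand-in, not the SAS procedures discussed above: random search
# over hyperparameters with cross-validated scoring.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(2, 16),
        "min_samples_leaf": randint(1, 10),
    },
    n_iter=20,        # candidate configurations to sample
    cv=3,             # 3-fold cross-validation guards against overfitting
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```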
{
"docid": "568c7ef495bfc10936398990e72a04d2",
"text": "Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be accurately captured using the algorithm where the mean Pearson's correlation coefficient between the power spectral densities of the reference and the reconstructed heart rate time series was found to be 0.98. These results show that the SpaMA method has a potential for PPG-based HR monitoring in wearable devices for fitness tracking and health monitoring during intense physical activities.",
"title": ""
},
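The core idea in that passage, compare the PPG and accelerometer power spectra and keep the strongest PPG peak that does not coincide with a motion peak, can be sketched as follows; the window length, frequency band, and tolerance are placeholders, not the published SpaMA settings.

```python
# Rough sketch of the spectral-comparison idea described above (parameters
# are placeholders): reject PPG peaks that coincide with the dominant
# accelerometer (motion) frequency, then report the strongest survivor.
import numpy as np
from scipy.signal import welch

def estimate_hr(ppg, accel, fs, tol_hz=0.1):
    f, p_ppg = welch(ppg, fs=fs, nperseg=int(8 * fs))
    _, p_acc = welch(accel, fs=fs, nperseg=int(8 * fs))
    band = (f >= 0.5) & (f <= 3.0)                 # 30-180 beats/min
    f, p_ppg, p_acc = f[band], p_ppg[band], p_acc[band]
    motion_peak = f[np.argmax(p_acc)]              # dominant motion frequency
    for idx in np.argsort(p_ppg)[::-1]:            # PPG peaks, strongest first
        if abs(f[idx] - motion_peak) > tol_hz:
            return 60.0 * f[idx]                   # beats per minute
    return 60.0 * f[np.argmax(p_ppg)]              # fallback: strongest peak
```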
{
"docid": "18ffa160ffce386993b5c2da5070b364",
"text": "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up/top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. We show superior classification performance improvement over the state-of-the-art.",
"title": ""
},
{
"docid": "51df36570be2707556a8958e16682612",
"text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.",
"title": ""
},
{
"docid": "01b3c9758bd68ad68a2f1d262feaa4e8",
"text": "A low-voltage-swing MOSFET gate drive technique is proposed in this paper for enhancing the efficiency characteristics of high-frequency-switching dc-dc converters. The parasitic power dissipation of a dc-dc converter is reduced by lowering the voltage swing of the power transistor gate drivers. A comprehensive circuit model of the parasitic impedances of a monolithic buck converter is presented. Closed-form expressions for the total power dissipation of a low-swing buck converter are proposed. The effect of reducing the MOSFET gate voltage swings is explored with the proposed circuit model. A range of design parameters is evaluated, permitting the development of a design space for full integration of active and passive devices of a low-swing buck converter on the same die, for a target CMOS technology. The optimum gate voltage swing of a power MOSFET that maximizes efficiency is lower than a standard full voltage swing. An efficiency of 88% at a switching frequency of 102 MHz is achieved for a voltage conversion from 1.8 to 0.9 V with a low-swing dc-dc converter based on a 0.18-/spl mu/m CMOS technology. The power dissipation of a low-swing dc-dc converter is reduced by 27.9% as compared to a standard full-swing dc-dc converter.",
"title": ""
},
{
"docid": "55a6353fa46146d89c7acd65bee237b5",
"text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.",
"title": ""
},
{
"docid": "738303da7e26ff4145d32526d44c55a8",
"text": "Diffuse large B-cell lymphoma (DLBCL) accounts for approximately 30% of non-Hodgkin lymphoma (NHL) cases in adult series. DLBCL is characterized by marked clinical and biological heterogeneity, encompassing up to 16 distinct clinicopathological entities. While current treatments are effective in 60% to 70% of patients, those who are resistant to treatment continue to die from this disease. An expert panel performed a systematic review of all data on the diagnosis, prognosis, and treatment of DLBCL published in PubMed, EMBASE and MEDLINE up to December 2017. Recommendations were classified in accordance with the Grading of Recommendations Assessment Development and Evaluation (GRADE) framework, and the proposed recommendations incorporated into practical algorithms. Initial discussions between experts began in March 2016, and a final consensus was reached in November 2017. The final document was reviewed by all authors in February 2018 and by the Scientific Committee of the Spanish Lymphoma Group GELTAMO.",
"title": ""
},
{
"docid": "28fbb71fab5ea16ef52611b31fcf1dfa",
"text": "Gamification, an emerging idea for using game design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, few research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and, based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users; that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and, at the same time, advance existing theories.",
"title": ""
}
] |
scidocsrr
|
add06224f8eaefc86b826e4b40849918
|
Do multiple outcome measures require p-value adjustment?
|
[
{
"docid": "2449aaafacd9a824a8f867052bd7ffe3",
"text": "As medicine leans increasingly on mathematics no clinician can afford to leave the statistical aspects of a paper to the \"experts.\" If you are numerate, try the \"Basic Statistics for Clinicians\" series in the Canadian Medical Association Journal,1 2 3 4 or a more mainstream statistical textbook.5 If, on the other hand, you find statistics impossibly difficult, this article and the next in this series give a checklist of preliminary questions to help you appraise the statistical validity of a paper.",
"title": ""
}
] |
[
{
"docid": "08473b813d0c9e3441d5293c8d1f1a12",
"text": "We present the design, implementation, and informal evaluation of tactile interfaces for small touch screens used in mobile devices. We embedded a tactile apparatus in a Sony PDA touch screen and enhanced its basic GUI elements with tactile feedback. Instead of observing the response of interface controls, users can feel it with their fingers as they press the screen. In informal evaluations, tactile feedback was greeted with enthusiasm. We believe that tactile feedback will become the next step in touch screen interface design and a standard feature of future mobile devices.",
"title": ""
},
{
"docid": "9d775637b3ed678a6de2e41a53a0a19a",
"text": "Research Article Kevin K.Y. Kuan The University of Sydney kevin.kuan@sydney.edu.au Kai-Lung Hui Hong Kong University of Science and Technology klhui@ust.hk Many online review systems adopt a voluntary voting mechanism to identify helpful reviews to support consumer purchase decisions. While several studies have looked at what makes an online review helpful (review helpfulness), little is known on what makes an online review receive votes (review voting). Drawing on information processing theories and the related literature, we investigated the effects of a select set of review characteristics, including review length and readability, review valence, review extremity, and reviewer credibility on two outcomes—review voting and review helpfulness. We examined and analyzed a large set of review data from Amazon with the sample selection model. Our results indicate that there are systematic differences between voted and non-voted reviews, suggesting that helpful reviews with certain characteristics are more likely to be observed and identified in an online review system than reviews without the characteristics. Furthermore, when review characteristics had opposite effects on the two outcomes (i.e. review voting and review helpfulness), ignoring the selection effects due to review voting would result in the effects on review helpfulness being over-estimated, which increases the risk of committing a type I error. Even when the effects on the two outcomes are in the same direction, ignoring the selection effects due to review voting would increase the risk of committing type II error that cannot be mitigated with a larger sample. We discuss the implications of the findings on research and practice.",
"title": ""
},
{
"docid": "f24fb451d6ee013a6bbc8737c0eae689",
"text": "Data on health literacy (HL) in the population is limited for Asian countries. This study aimed to test the validity of the Mandarin version of the European Health Literacy Survey Questionnaire (HLS-EU-Q) for use in the general public in Taiwan. Multistage stratification random sampling resulted in a sample of 2989 people aged 15 years and above. The HLS-EU-Q was validated by confirmatory factor analysis with excellent model data fit indices. The general HL of the Taiwanese population was 34.4 ± 6.6 on a scale of 50. Multivariate regression analysis showed that higher general HL is significantly associated with the higher ability to pay for medication, higher self-perceived social status, higher frequency of watching health-related TV, and community involvement but associated with younger age. HL is also associated with health status, health behaviors, and health care accessibility and use. The HLS-EU-Q was found to be a useful tool to assess HL and its associated factors in the general population.",
"title": ""
},
{
"docid": "764ce631c7c2c68253b4ee15e130bc34",
"text": "Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive—new systems and models are being deployed in every domain imaginable, leading to rapid and widespread deployment of software based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community’s understanding of the nature and extent of these vulnerabilities remains limited. We systematize recent findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework. Key insights resulting from works both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. We conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation. Through these explorations, we show that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used.",
"title": ""
},
{
"docid": "0c24ae9f3d632e25e1bef425b39f8208",
"text": "Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrate at each instant perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can be also extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted at all positions in the signal is proposed, as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows to effectively localize the sound source on the video in presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.",
"title": ""
},
{
"docid": "4f59e141ffc88aaed620ca58522e8f03",
"text": "Undergraduate volunteers rated a series of words for pleasantness while hearing a particular background music. The subjects in Experiment 1 received, immediately or after a 48-h delay, an unexpected word-recall test in one of the following musical cue contexts: same cue (S), different cue (D), or no cue (N). For immediate recall, context dependency (S-D) was significant but same-cue facilitation (S-N) was not. No cue effects at all were found for delayed recall, and there was a significant interaction between cue and retention interval. A similar interaction was also found in Experiment 3, which was designed to rule out an alternative explanation with respect to distraction. When the different musical selection was changed specifically in either tempo or form (genre), only pieces having an altered tempo produced significantly lower immediate recall compared with the same pieces (Experiment 2). The results support a stimulus generalization view of music-dependent memory.",
"title": ""
},
{
"docid": "88de6047cec54692dea08abe752acd25",
"text": "Heap-based attacks depend on a combination of memory management error and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD and OpenBSD, and shows that they remain vulnerable to attack. It them presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.",
"title": ""
},
{
"docid": "0ccc233ea8225de88882883d678793c8",
"text": "Sustaining of Moore's Law over the next decade will require not only continued scaling of the physical dimensions of transistors but also performance improvement and aggressive reduction in power consumption. Heterojunction Tunnel FET (TFET) has emerged as promising transistor candidate for supply voltage scaling down to sub-0.5V due to the possibility of sub-kT/q switching without compromising on-current (ION). Recently, n-type III-V HTFET with reasonable on-current and sub-kT/q switching at supply voltage of 0.5V have been experimentally demonstrated. However, steep switching performance of III-V HTFET till date has been limited to range of drain current (IDS) spanning over less than a decade. In this work, we will present progress on complimentary Tunnel FETs and analyze primary roadblocks in the path towards achieving steep switching performance in III-V HTFET.",
"title": ""
},
{
"docid": "94c6ab34e39dd642b94cc2f538451af8",
"text": "Like every other social practice, journalism cannot now fully be understood apart from globalization. As part of a larger platform of communication media, journalism contributes to this experience of the world-as-a-single-place and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. In this essay, I review this intersection of journalism and globalization by considering the communication field’s approach to ‘media globalization’ within a broader interdisciplinary perspective that mixes the sociology of globalization with aspects of geography and social anthropology. By placing the emphasis on social practices, elites, and specific geographical spaces, I introduce a less media-centric approach to media globalization and how journalism fits into the process. Beyond ‘global village journalism,’ this perspective captures the changes globalization has brought to journalism. Like every other social practice, journalism cannot now fully be understood apart from globalization. This process refers to the intensification of social interconnections, which allows apprehending the world as a single place, creating a greater awareness of our own place and its relative location within the range of world experience. As part of a larger platform of communication media, journalism contributes to this experience and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. The study of globalization has become a fashionable growth industry, attracting an interdisciplinary assortment of scholars. Journalism, meanwhile, itself has become an important subject in its own right within media studies, with a growing number of projects taking an international perspective (reviewed in Reese 2009). Combining the two areas yields a complex subject that requires some careful sorting out to get beyond the jargon and the easy country–by-country case studies. From the globalization studies side, the media role often seems like an afterthought, a residual category of social change, or a self-evident symbol of the global era–CNN, for example. Indeed, globalization research has been slower to consider the changing role of journalism, compared to the attention devoted to financial and entertainment flows. That may be expected, given that economic and cultural globalization is further along than that of politics, and journalism has always been closely tied to democratic structures, many of which are inherently rooted in local communities. The media-centrism of communication research, on the other hand, may give the media—and the journalism associated with them—too much credit in the globalization process, treating certain media as the primary driver of global connections and the proper object of study. Global connections support new forms of journalism, which create politically significant new spaces within social systems, lead to social change, and privilege certain forms Sociology Compass 4/6 (2010): 344–353, 10.1111/j.1751-9020.2010.00282.x a 2010 The Author Journal Compilation a 2010 Blackwell Publishing Ltd of power. 
Therefore, we want to know how journalism has contributed to these new spaces, bringing together new combinations of transnational élites, media professionals, and citizens. To what extent are these interactions shaped by a globally consistent shared logic, and what are the consequences for social change and democratic values? Here, however, the discussion often gets reduced to whether a cultural homogenization is taking place, supporting a ‘McWorld’ thesis of a unitary media and journalistic form. But we do not have to subscribe to a one-world media monolith prediction to expect certain transnational logics to emerge to take their place along side existing ones. Journalism at its best contributes to social transparency, which is at the heart of the globalization optimists’ hopes for democracy (e.g. Giddens 2000). The insertion of these new logics into national communities, especially those closed or tightly controlled societies, can bring an important impulse for social change (seen in a number of case studies from China, as in Reese and Dai 2009). In this essay, I will review a few of the issues at the intersection of journalism and globalization and consider a more nuanced view of media within a broader network of actors, particularly in the case of journalism as it helps create emerging spaces for public affairs discourse. Understanding the complex interplay of the global and local requires an interdisciplinary perspective, mixing the sociology of globalization with aspects of geography and social anthropology. This helps avoid equating certain emerging global news forms with a new and distinct public sphere. The globalization of journalism occurs through a multitude of levels, relationships, social actors, and places, as they combine to create new public spaces. Communication research may bring journalism properly to the fore, but it must be considered within the insights into places and relationships provided by these other disciplines. Before addressing these questions, it is helpful to consider how journalism has figured into some larger debates. Media Globalization: Issues of Scale and Homogeneity One major fault line lies within the broader context of ‘media,’ where journalism has been seen as providing flows of information and transnational connections. That makes it a key factor in the phenomenon of ‘media globalization.’ McLuhan gave us the enduring image of the ‘global village,’ a quasi-utopian idea that has seeped into such theorizing about the contribution of media. The metaphor brings expectations of an extensive, unitary community, with a corresponding set of universal, global values, undistorted by parochial interests and propaganda. The interaction of world media systems, however, has not as of yet yielded the kind of transnational media and programs that would support such ‘village’-worthy content (Ferguson 1992; Sparks 2007). In fact, many of the communication barriers show no signs of coming down, with many specialized enclaves becoming stronger. In this respect, changes in media reflect the larger crux of globalization that it simultaneously facilitates certain ‘monoculture’ global standards along with the proliferation of a host of micro-communities that were not possible before. In a somewhat analogous example, the global wine trade has led to convergent trends in internationally desirable tastes but also allowed a number of specialized local wineries to survive and flourish through the ability to reach global markets. 
The very concept of ‘media globalization’ suggests that we are not quite sure if media lead to globalization or are themselves the result of it. In any case, giving the media a privileged place in shaping a globalized future has led to high expectations for international journalism, satellite television, and other media to provide a workable global public sphere, making them an easy target if they come up short. In his book, Media globalization Journalism and Globalization 345 a 2010 The Author Sociology Compass 4/6 (2010): 344–353, 10.1111/j.1751-9020.2010.00282.x Journal Compilation a 2010 Blackwell Publishing Ltd myth, Kai Hafez (2007) provides that kind of attack. Certainly, much of the discussion has suffered from overly optimistic and under-conceptualized research, with global media technology being a ‘necessary but not sufficient condition for global communication.’ (p. 2) Few truly transnational media forms have emerged that have a more supranational than national allegiance (among newspapers, the International Herald Tribune, Wall St. Journal Europe, Financial Times), and among transnational media even CNN does not present a single version to the world, split as it is into various linguistic viewer zones. Defining cross-border communication as the ‘core phenomenon’ of globalization leads to comparing intrato inter-national communication as the key indicator of globalization. For example, Hafez rejects the internet as a global system of communication, because global connectivity does not exceed local and regional connections. With that as a standard, we may indeed conclude that media globalization has failed to produce true transnational media platforms or dialogs across boundaries. Rather a combination of linguistic and digital divides, along with enduring regional preferences, actually reinforces some boundaries. (The wishful thinking for a global media may be tracked to highly mobile Western scholars, who in Hafez’s ‘hotel thesis’ overestimate the role of such transnational media, because they are available to them in their narrow and privileged travel circles.) Certainly, the foreign news most people receive, even about big international events, is domesticated through the national journalistic lens. Indeed, international reporting, as a key component of the would-be global public sphere, flunks Hafez’s ‘global test,’ incurring the same criticisms others have leveled for years at national journalism: elite-focused, conflictual, and sensational, with a narrow, parochial emphasis. If ‘global’ means giving ‘dialogic’ voices a chance to speak to each other without reproducing national ethnocentrism, then the world’s media still fail to measure up. Conceptualizing the ‘Global’ For many, ‘global’ means big. That goes too for the global village perspective, which emphasizes the scaling dimension and equates the global with ‘bigness,’ part of a nested hierarchy of levels of analysis based on size: beyond local, regional, and nationa",
"title": ""
},
{
"docid": "dcf7214c15c13f13d33c9a7b2c216588",
"text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.",
"title": ""
},
{
"docid": "74128a89e6dc36b264c36a27f5e63cb0",
"text": "Throughout the last 15 years, Synchronous Reluctance (SyR) motor drives have been presented as a competitive alternative to Induction Motor (IM) drives in many variable speed applications, at least in the small power range. Very few examples of SyR motors above the 100Nm torque size are present in the literature. The main advantage of the SyR motor lays in the absence of rotor Joule losses that permits to obtain a continuous torque that is higher than the one of an IM of the same size. The paper presents a 250kW, 1000rpm Synchronous Reluctance motor for industrial applications and its performance comparison with an Induction Motor with identical stator. The rotor of the SyR motor has been purposely designed to fit the stator and housing of the IM competitor. The experimental comparison of the two motors is presented, on a regenerative test bench where one the two motors under test can load the competitor and vice-versa. The results of the experimental tests confirm the assumption that the SyR motor gives more torque and then more power in continuous operation at rated windings temperature. However, the IM maintains a certain advantage in terms of flux-weakening capability that means a larger constant-power speed range.",
"title": ""
},
{
"docid": "f02224b34170dbb8482e84cd4eb2c31e",
"text": "BACKGROUND\nMany countries in middle- and low-income countries today suffer from severe staff shortages and/or maldistribution of health personnel which has been aggravated more recently by the disintegration of health systems in low-income countries and by the global policy environment. One of the most damaging effects of severely weakened and under-resourced health systems is the difficulty they face in producing, recruiting, and retaining health professionals, particularly in remote areas. Low wages, poor working conditions, lack of supervision, lack of equipment and infrastructure as well as HIV and AIDS, all contribute to the flight of health care personnel from remote areas. In this global context of accelerating inequities health service policy makers and managers are searching for ways to improve the attraction and retention of staff in remote areas. But the development of appropriate strategies first requires an understanding of the factors which influence decisions to accept and/or stay in a remote post, particularly in the context of mid and low income countries (MLICS), and which strategies to improve attraction and retention are therefore likely to be successful. It is the aim of this review article to explore the links between attraction and retention factors and strategies, with a particular focus on the organisational diversity and location of decision-making.\n\n\nMETHODS\nThis is a narrative literature review which took an iterative approach to finding relevant literature. It focused on English-language material published between 1997 and 2007. The authors conducted Pubmed searches using a range of different search terms relating to attraction and retention of staff in remote areas. Furthermore, a number of relevant journals as well as unpublished literature were systematically searched. While the initial search included articles from high- middle- and low-income countries, the review focuses on middle- and low-income countries. About 600 papers were initially assessed and 55 eventually included in the review.\n\n\nRESULTS\nThe authors argue that, although factors are multi-facetted and complex, strategies are usually not comprehensive and often limited to addressing a single or limited number of factors. They suggest that because of the complex interaction of factors impacting on attraction and retention, there is a strong argument to be made for bundles of interventions which include attention to living environments, working conditions and environments and development opportunities. They further explore the organisational location of decision-making related to retention issues and suggest that because promising strategies often lie beyond the scope of human resource directorates or ministries of health, planning and decision-making to improve retention requires multi-sectoral collaboration within and beyond government. The paper provides a simple framework for bringing the key decision-makers together to identify factors and develop multi-facetted comprehensive strategies.\n\n\nCONCLUSION\nThere are no set answers to the problem of attraction and retention. It is only through learning about what works in terms of fit between problem analysis and strategy and effective navigation through the politics of implementation that any headway will be made against the almost universal challenge of staffing health service in remote rural areas.",
"title": ""
},
{
"docid": "ff9b5d96b762b2baacf4bf19348c614b",
"text": "Drought stress is a major factor in reduce growth, development and production of plants. Stress was applied with polyethylene glycol (PEG) 6000 and water potentials were: zero (control), -0.15 (PEG 10%), -0.49 (PEG 20%), -1.03 (PEG 30%) and -1.76 (PEG40%) MPa. The solutes accumulation of two maize (Zea mays L.) cultivars -704 and 301were determined after drought stress. In our experiments, a higher amount of soluble sugars and a lower amount of starch were found under stress. Soluble sugars concentration increased (from 1.18 to 1.90 times) in roots and shoots of both varieties when the studied varieties were subjected to drought stress, but starch content were significantly (p<0.05) decreased (from 16 to 84%) in both varieties. This suggests that sugars play an important role in Osmotic Adjustment (OA) in maize. The free proline level also increased (from 1.56 to 3.13 times) in response to drought stress and the increase in 704 var. was higher than 301 var. It seems to proline may play a role in minimizing the damage caused by dehydration. Increase of proline content in shoots was higher than roots, but increase of soluble sugar content and decrease of starch content in roots was higher than shoots.",
"title": ""
},
{
"docid": "ae4c9e5df340af3bd35ae5490083c72a",
"text": "The massive technological advancements around the world have created significant challenging competition among companies where each of the companies tries to attract the customers using different techniques. One of the recent techniques is Augmented Reality (AR). The AR is a new technology which is capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in the industry of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. The AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became transportable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields in our life in tangible and exciting ways such as news, sports and is used in many domains in our life such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate the learning whereas it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete one another, and every one of them exerts its best to gain the customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges and future trends.",
"title": ""
},
{
"docid": "830a585529981bd5b61ac5af3055d933",
"text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"title": ""
},
{
"docid": "e3acdb12bf902aeee1d6619fd1bd13cc",
"text": "The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.",
"title": ""
},
{
"docid": "edec01ca60d2fbdd82a419441d876b89",
"text": "The concept of school engagement has attracted increasing attention as representing a possible antidote to declining academic motivation and achievement. Engagement is presumed to be malleable, responsive to contextualfeatures, and amenable to environmental change. Researchers describe behavioral, emotional, and cognitive engagement and recommend studying engagement as a multifaceted construct. This article reviews definitions, measures, precursors, and outcomes of engagement; discusses limitations in the existing research; and suggests improvements. The authors conclude that, although much has been learned, the potential contribution of the concept of school engagement to research on student experience has yet to be realized. They callfor richer characterizations of how students behave, feel, and think-research that could aid in the development offinely tuned interventions.",
"title": ""
},
{
"docid": "274a9094764edd249f1682fbca93a866",
"text": "Visual saliency detection is a challenging problem in computer vision, but one of great importance and numerous applications. In this paper, we propose a novel model for bottom-up saliency within the Bayesian framework by exploiting low and mid level cues. In contrast to most existing methods that operate directly on low level cues, we propose an algorithm in which a coarse saliency region is first obtained via a convex hull of interest points. We also analyze the saliency information with mid level visual cues via superpixels. We present a Laplacian sparse subspace clustering method to group superpixels with local features, and analyze the results with respect to the coarse saliency region to compute the prior saliency map. We use the low level visual cues based on the convex hull to compute the observation likelihood, thereby facilitating inference of Bayesian saliency at each pixel. Extensive experiments on a large data set show that our Bayesian saliency model performs favorably against the state-of-the-art algorithms.",
"title": ""
},
{
"docid": "7ae332505306f94f8f2b4e3903188126",
"text": "Clustering Web services would greatly boost the ability of Web service search engine to retrieve relevant services. The performance of traditional Web service description language (WSDL)-based Web service clustering is not satisfied, due to the singleness of data source. Recently, Web service search engines such as Seekda! allow users to manually annotate Web services using tags, which describe functions of Web services or provide additional contextual and semantical information. In this paper, we cluster Web services by utilizing both WSDL documents and tags. To handle the clustering performance limitation caused by uneven tag distribution and noisy tags, we propose a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation. Extensive experiments are conducted based on our real-world dataset, which consists of 15,968 Web services. The experimental results demonstrate the effectiveness of our proposed service clustering and tag recommendation strategies. Specifically, compared with traditional WSDL-based Web service clustering approaches, the proposed approach produces gains in both precision and recall for up to 14 % in most cases.",
"title": ""
},
{
"docid": "69b3275cb4cae53b3a8888e4fe7f85f7",
"text": "In this paper we propose a way to improve the K-SVD image denoising algorithm. The suggested method aims to reduce the gap that exists between the local processing (sparse-coding of overlapping patches) and the global image recovery (obtained by averaging the overlapping patches). Inspired by game-theory ideas, we define a disagreement-patch as the difference between the intermediate locally denoised patch and its corresponding part in the final outcome. Our algorithm iterates the denoising process several times, applied on modified patches. Those are obtained by subtracting the disagreement-patches from their corresponding input noisy ones, thus pushing the overlapping patches towards an agreement. Experimental results demonstrate the improvement this algorithm leads to.",
"title": ""
}
] |
scidocsrr
|
4c97a2a79dbdb1dceb4477dbc1e1cdac
|
TDMA-ASAP: Sensor Network TDMA Scheduling with Adaptive Slot-Stealing and Parallelism
|
[
{
"docid": "cb2d8e7b01de6cdb5a303a38cc11e211",
"text": "Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals.\n In this paper, we present, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption. PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation runPowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations.",
"title": ""
}
] |
[
{
"docid": "a67a6db0baa3c2357023229f61e0c288",
"text": "This paper presents a new control strategy for data centers that aims to optimize the trade-off between maximizing the payoff from the provided quality of computational services and minimizing energy costs for computation and cooling. The data center is modeled as two interacting dynamic networks: a computational (cyber) network representing the distribution and flow of computational tasks, and a thermal (physical) network characterizing the distribution and flow of thermal energy. To make the problem tractable, the control architecture is decomposed hierarchically according to time-scales in the thermal and computational network dynamics, and spatially, reflecting weak coupling between zones in the data center. Simulation results demonstrate the effectiveness of the proposed coordinated control strategy relative to traditional approaches in which the cyber and physical resources are controlled independently.",
"title": ""
},
{
"docid": "da3876613301b46645408e474c1f5247",
"text": "The Strength Pareto Evolutionary Algorithm (SPEA) (Zitzle r and Thiele 1999) is a relatively recent technique for finding or approximatin g the Pareto-optimal set for multiobjective optimization problems. In different st udies (Zitzler and Thiele 1999; Zitzler, Deb, and Thiele 2000) SPEA has shown very good performance in comparison to other multiobjective evolutionary algorith ms, and therefore it has been a point of reference in various recent investigations, e.g., (Corne, Knowles, and Oates 2000). Furthermore, it has been used in different a pplic tions, e.g., (Lahanas, Milickovic, Baltas, and Zamboglou 2001). In this pap er, an improved version, namely SPEA2, is proposed, which incorporates in cont rast o its predecessor a fine-grained fitness assignment strategy, a density estima tion technique, and an enhanced archive truncation method. The comparison of SPEA 2 with SPEA and two other modern elitist methods, PESA and NSGA-II, on diffe rent test problems yields promising results.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "a7f4a57534ee0a02b675e3b7acdf53d3",
"text": "Semantic-oriented service matching is one of the challenges in automatic Web service discovery. Service users may search for Web services using keywords and receive the matching services in terms of their functional profiles. A number of approaches to computing the semantic similarity between words have been developed to enhance the precision of matchmaking, which can be classified into ontology-based and corpus-based approaches. The ontology-based approaches commonly use the differentiated concept information provided by a large ontology for measuring lexical similarity with word sense disambiguation. Nevertheless, most of the ontologies are domain-special and limited to lexical coverage, which have a limited applicability. On the other hand, corpus-based approaches rely on the distributional statistics of context to represent per word as a vector and measure the distance of word vectors. However, the polysemous problem may lead to a low computational accuracy. In this paper, in order to augment the semantic information content in word vectors, we propose a multiple semantic fusion (MSF) model to generate sense-specific vector per word. In this model, various semantic properties of the general-purpose ontology WordNet are integrated to fine-tune the distributed word representations learned from corpus, in terms of vector combination strategies. The retrofitted word vectors are modeled as semantic vectors for estimating semantic similarity. The MSF model-based similarity measure is validated against other similarity measures on multiple benchmark datasets. Experimental results of word similarity evaluation indicate that our computational method can obtain higher correlation coefficient with human judgment in most cases. Moreover, the proposed similarity measure is demonstrated to improve the performance of Web service matchmaking based on a single semantic resource. Accordingly, our findings provide a new method and perspective to understand and represent lexical semantics.",
"title": ""
},
{
"docid": "4d44572846a0989bf4bc230b669c88b7",
"text": "Application-specific integrated circuit (ASIC) ML4425 is often used for sensorless control of permanent-magnet (PM) brushless direct current (BLDC) motor drives. It integrates the terminal voltage of the unenergized winding that contains the back electromotive force (EMF) information and uses a phase-locked loop (PLL) to determine the proper commutation sequence for the BLDC motor. However, even without pulsewidth modulation, the terminal voltage is distorted by voltage pulses due to the freewheel diode conduction. The pulses, which appear very wide in an ultrahigh-speed (120 kr/min) drive, are also integrated by the ASIC. Consequently, the motor commutation is significantly retarded, and the drive performance is deteriorated. In this paper, it is proposed that the ASIC should integrate the third harmonic back EMF instead of the terminal voltage, such that the commutation retarding is largely reduced and the motor performance is improved. Basic principle and implementation of the new ASIC-based sensorless controller will be presented, and experimental results will be given to verify the control strategy. On the other hand, phase delay in the motor currents arises due to the influence of winding inductance, reducing the drive performance. Therefore, a novel circuit with discrete components is proposed. It also uses the integration of third harmonic back EMF and the PLL technique and provides controllable advanced commutation to the BLDC motor.",
"title": ""
},
{
"docid": "6527c10c822c2446b7be928f86d3c8f8",
"text": "In this paper we present a novel algorithm for automatic analysis, transcription, and parameter extraction from isolated polyphonic guitar recordings. In addition to general score-related information such as note onset, duration, and pitch, instrumentspecific information such as the plucked string, the applied plucking and expression styles are retrieved automatically. For this purpose, we adapted several state-of-the-art approaches for onset and offset detection, multipitch estimation, string estimation, feature extraction, and multi-class classification. Furthermore we investigated a robust partial tracking algorithm with respect to inharmonicity, an extensive extraction of novel and known audio features as well as the exploitation of instrument-based knowledge in the form of plausability filtering to obtain more reliable prediction. Our system achieved very high accuracy values of 98 % for onset and offset detection as well as multipitch estimation. For the instrument-related parameters, the proposed algorithm also showed very good performance with accuracy values of 82 % for the string number, 93 % for the plucking style, and 83 % for the expression style. Index Terms playing techniques, plucking style, expression style, multiple fundamental frequency estimation, string classification, fretboard position, fingering, electric guitar, inharmonicity coefficient, tablature",
"title": ""
},
{
"docid": "890758b7ed5c5c879fba957bf3f13527",
"text": "Existing approaches to identify the tie strength between users involve typically only one type of network. To date, no studies exist that investigate the intensity of social relations and in particular partnership between users across social networks. To fill this gap in the literature, we studied over 50 social proximity features to detect the tie strength of users defined as partnership in two different types of networks: location-based and online social networks. We compared user pairs in terms of partners and non-partners and found significant differences between those users. Following these observations, we evaluated the social proximity of users via supervised and unsupervised learning approaches and establish that location-based social networks have a great potential for the identification of a partner relationship. In particular, we established that location-based social networks and correspondingly induced features based on events attended by users could identify partnership with 0.922 AUC, while online social network data had a classification power of 0.892 AUC. When utilizing data from both types of networks, a partnership could be identified to a great extent with 0.946 AUC. This article is relevant for engineers, researchers and teachers who are interested in social network analysis and mining.",
"title": ""
},
{
"docid": "277152e8471b497174d9dd165717f892",
"text": "Fault diagnosis is useful in helping technicians detect, isolate, and identify faults, and troubleshoot. Bayesian network (BN) is a probabilistic graphical model that effectively deals with various uncertainty problems. This model is increasingly utilized in fault diagnosis. This paper presents bibliographical review on use of BNs in fault diagnosis in the last decades with focus on engineering systems. This work also presents general procedure of fault diagnosis modeling with BNs; processes include BN structure modeling, BN parameter modeling, BN inference, fault identification, validation, and verification. The paper provides series of classification schemes for BNs for fault diagnosis, BNs combined with other techniques, and domain of fault diagnosis with BN. This study finally explores current gaps and challenges and several directions for future research.",
"title": ""
},
{
"docid": "e5667a65bc628b93a1d5b0e37bfb8694",
"text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.",
"title": ""
},
{
"docid": "15709a8aecbf8f4f35bf47b79c3dca03",
"text": "We introduce a new approach to hierarchy formation and task decomposition in hierarchical reinforcement learning. Our method is based on the Hierarchy Of Abstract Machines (HAM) framework because HAM approach is able to design efficient controllers that will realize specific behaviors in real robots. The key to our algorithm is the introduction of the internal or “mental” environment in which the state represents the structure of the HAM hierarchy. The internal action in this environment leads to changes the hierarchy of HAMs. We propose the classical Qlearning procedure in the internal environment which allows the agent to obtain an optimal hierarchy. We extends the HAM framework by adding on-model approach to select the appropriate sub-machine to execute action sequences for certain class of external environment states. Preliminary experiments demonstrated the prospects of the method.",
"title": ""
},
{
"docid": "6c2ebec143f8a9ee3a83f494867ebce6",
"text": "Monitoring network traffic and detecting unwanted applications has become a challenging problem, since many applications obfuscate their traffic using unregistered port numbers or payload encryption. Apart from some notable exceptions, most traffic monitoring tools use two types of approaches: (a) keeping traffic statistics such as packet sizes and interarrivals, flow counts, byte volumes, etc., or (b) analyzing packet content. In this paper, we propose the use of Traffic Dispersion Graphs (TDGs) as a way to monitor, analyze, and visualize network traffic. TDGs model the social behavior of hosts (\"who talks to whom\"), where the edges can be defined to represent different interactions (e.g. the exchange of a certain number or type of packets). With the introduction of TDGs, we are able to harness a wealth of tools and graph modeling techniques from a diverse set of disciplines.",
"title": ""
},
{
"docid": "e29f4224c5d0f921304e54bd1555cb38",
"text": "More and more sensitivity improvement is required for current sensors that are used in new area of applications, such as electric vehicle, smart meter, and electricity usage monitoring system. To correspond with the technical needs, a high precision magnetic current sensor module has been developed. The sensor module features an excellent linearity and a small magnetic hysteresis. In addition, it offers 2.5-4.5 V voltage output for 0-300 A positive input current and 0.5-2.5 V voltage output for 0-300 A negative input current under -40 °C-125 °C, VCC = 5 V condition.",
"title": ""
},
{
"docid": "8f3497ecbe4c4687a1bc669c8933b556",
"text": "Many problems in multi-view geometry, when posed as minimization of the maximum reprojection error across observations, can be solved optimally in polynomial time. We show that these problems are instances of a convex-concave generalized fractional program. We survey the major solution methods for solving problems of this form and present them in a unified framework centered around a single parametric optimization problem. We propose two new algorithms and show that the algorithm proposed by Olsson et al. [21] is a special case of a classical algorithm for generalized fractional programming. The performance of all the algorithms is compared on a variety of datasets, and the algorithm proposed by Gugat [12] stands out as a clear winner. An open source MATLAB toolbox that implements all the algorithms presented here is made available.",
"title": ""
},
{
"docid": "45a45087a6829486d46eda0adcff978f",
"text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.",
"title": ""
},
{
"docid": "05e4cfafcef5ad060c1f10b9c6ad2bc0",
"text": "Mobile devices have been integrated into our everyday life. Consequently, home automation and security are becoming increasingly prominent features on mobile devices. In this paper, we have developed a security system that interfaces with an Android mobile device. The mobile device and security system communicate via Bluetooth because a short-range-only communications system was desired. The mobile application can be loaded onto any compatible device, and once loaded, interface with the security system. Commands to lock, unlock, or check the status of the door to which the security system is installed can be sent quickly from the mobile device via a simple, easy to use GUI. The security system then acts on these commands, taking the appropriate action and sending a confirmation back to the mobile device. The security system can also tell the user if the door is open. The door also incorporates a traditional lock and key interface in case the user loses the mobile device.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "268f69996c65f0ab8192719935e9460b",
"text": "for many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of a data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms.",
"title": ""
},
{
"docid": "f12cbeb6a202ea8911a67abe3ffa6ccc",
"text": "In order to enhance the study of the kinematics of any robot arm, parameter design is directed according to certain necessities for the robot, and its forward and inverse kinematics are discussed. The DH convention Method is used to form the kinematical equation of the resultant structure. In addition, the Robotics equations are modeled in MATLAB to create a 3D visual simulation of the robot arm to show the result of the trajectory planning algorithms. The simulation has detected the movement of each joint of the robot arm, and tested the parameters, thus accomplishing the predetermined goal which is drawing a sine wave on a writing board.",
"title": ""
},
{
"docid": "efcb591ad7523eb7a11f8291ad2de35a",
"text": "Chebfun is an established software system for computing with functions of a real variable, but its capabilities for handling functions with singularities are limited. Here an analogous system is described based on sinc function expansions instead of Chebyshev series. This experiment sheds light on the strengths and weaknesses of sinc function techniques. It also serves as a review of some of the main features of sinc methods, including construction, evaluation, zerofinding, optimization, integration, and differentiation.",
"title": ""
}
] |
scidocsrr
|
16571f8ee587ad9676d913b149ef7285
|
Merging knowledge bases in different languages
|
[
{
"docid": "47c96721db5ab8595ab3dcc2cf310954",
"text": "Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a neverending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidenceweighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.",
"title": ""
}
] |
[
{
"docid": "f86d2e40eabe4067da73070db337d9ce",
"text": "Despite tremendous efforts to develop stimuli-responsive enzyme delivery systems, their efficacy has been mostly limited to in vitro applications. Here we introduce, by using an approach of combining biomolecules with artificial compartments, a biomimetic strategy to create artificial organelles (AOs) as cellular implants, with endogenous stimuli-triggered enzymatic activity. AOs are produced by inserting protein gates in the membrane of polymersomes containing horseradish peroxidase enzymes selected as a model for natures own enzymes involved in the redox homoeostasis. The inserted protein gates are engineered by attaching molecular caps to genetically modified channel porins in order to induce redox-responsive control of the molecular flow through the membrane. AOs preserve their structure and are activated by intracellular glutathione levels in vitro. Importantly, our biomimetic AOs are functional in vivo in zebrafish embryos, which demonstrates the feasibility of using AOs as cellular implants in living organisms. This opens new perspectives for patient-oriented protein therapy. The efficacy of stimuli-responsive enzyme delivery systems is usually limited to in vitro applications. Here the authors form artificial organelles by inserting stimuli-responsive protein gates in membranes of polymersomes loaded with enzymes and obtain a triggered functionality both in vitro and in vivo.",
"title": ""
},
{
"docid": "8a3aca46031738f9a273287c355f1d0b",
"text": "Universal background models (UBM) in speaker recognition systems are typically Gaussian mixture models (GMM) traine d from a large amount of data using the maximum likelihood criterion. This paper investigates three alternative crit eria for training the UBM. In the first, we cluster an existing automat ic speech recognition (ASR) acoustic model to generate the UBM . In each of the other two, we use statistics based on the speake r labels of the development data to regularize the maximum lik elihood objective function in training the UBM. We present an iterative algorithm similar to the expectation maximization (EM) algorithm to train the UBM for each of these regularized maximum likelihood criteria. We present several experiments t hat show how combining only two systems outperforms the best published results on the English telephone tasks of the NIST 2008 speaker recognition evaluation.",
"title": ""
},
{
"docid": "e2459b9991cfda1e81119e27927140c5",
"text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.",
"title": ""
},
{
"docid": "b1313b777c940445eb540b1e12fa559e",
"text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.",
"title": ""
},
{
"docid": "418de962446199744b4ced735c506d41",
"text": "In this paper, a stereo matching algorithm based on image segments is presented. We propose the hybrid segmentation algorithm that is based on a combination of the Belief Propagation and Mean Shift algorithms with aim to refine the disparity and depth map by using a stereo pair of images. This algorithm utilizes image filtering and modified SAD (Sum of Absolute Differences) stereo matching method. Firstly, a color based segmentation method is applied for segmenting the left image of the input stereo pair (reference image) into regions. The aim of the segmentation is to simplify representation of the image into the form that is easier to analyze and is able to locate objects in images. Secondly, results of the segmentation are used as an input of the local window-based matching method to determine the disparity estimate of each image pixel. The obtained experimental results demonstrate that the final depth map can be obtained by application of segment disparities to the original images. Experimental results with the stereo testing images show that our proposed Hybrid algorithm HSAD gives a good performance.",
"title": ""
},
{
"docid": "12fe1e2edd640b55a769e5c881822aa6",
"text": "In this paper we introduce a runtime system to allow unmodified multi-threaded applications to use multiple machines. The system allows threads to migrate freely between machines depending on the workload. Our prototype, COMET (Code Offload by Migrating Execution Transparently), is a realization of this design built on top of the Dalvik Virtual Machine. COMET leverages the underlying memory model of our runtime to implement distributed shared memory (DSM) with as few interactions between machines as possible. Making use of a new VM-synchronization primitive, COMET imposes little restriction on when migration can occur. Additionally, enough information is maintained so one machine may resume computation after a network failure. We target our efforts towards augmenting smartphones or tablets with machines available in the network. We demonstrate the effectiveness of COMET on several real applications available on Google Play. These applications include image editors, turn-based games, a trip planner, and math tools. Utilizing a server-class machine, COMET can offer significant speed-ups on these real applications when run on a modern smartphone. With WiFi and 3G networks, we observe geometric mean speed-ups of 2.88X and 1.27X relative to the Dalvik interpreter across the set of applications with speed-ups as high as 15X on some applications.",
"title": ""
},
{
"docid": "787377fc8e1f9da5ec2b6ea77bcc0725",
"text": "We show that the counting class LWPP [8] remains unchanged even if one allows a polynomial number of gap values rather than one. On the other hand, we show that it is impossible to improve this from polynomially many gap values to a superpolynomial number of gap values by relativizable proof techniques. The first of these results implies that the Legitimate Deck Problem (from the study of graph reconstruction) is in LWPP (and thus low for PP, i.e., PPLegitimate Deck = PP) if the weakened version of the Reconstruction Conjecture holds in which the number of nonisomorphic preimages is assumed merely to be polynomially bounded. This strengthens the 1992 result of Köbler, Schöning, and Torán [15] that the Legitimate Deck Problem is in LWPP if the Reconstruction Conjecture holds, and provides strengthened evidence that the Legitimate Deck Problem is not NP-hard. We additionally show on the one hand that our main LWPP robustness result also holds for WPP, and also holds even when one allows both the rejectionand acceptancegap-value targets to simultaneously be polynomial-sized lists; yet on the other hand, we show that for the #P-based analog of LWPP the behavior much differs in that, in some relativized worlds, even two target values already yield a richer class than one value does. 2012 ACM Subject Classification Theory of computation → Complexity classes",
"title": ""
},
{
"docid": "64d53035eb919d5e27daef6b666b7298",
"text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.",
"title": ""
},
{
"docid": "d3b2283ce3815576a084f98c34f37358",
"text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.",
"title": ""
},
{
"docid": "76d59eaa0e2862438492b55f893ceea3",
"text": "The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on‐site and on‐time. At this point, the use of smart cameras ‐ of which the popularity has been increasing ‐ is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image‐processing algorithms. Since the images are not transmitted to a distance processing unit but rather are processed inside the camera, it does not necessitate high‐ bandwidth networks or high processor powered systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general‐purpose processors. In smart cameras ‐ which are real‐life applications of such methods ‐ the widest use is on DSPs. In the present study, the Viola‐Jones face detection method ‐ which was reported to run faster on PCs ‐ was optimized for DSPs; the face recognition method was combined with the developed sub‐region and mask‐based DCT (Discrete Cosine Transform). As the employed DSP is a fixed‐point processor, the processes were performed with integers insofar as it was possible. To enable face recognition, the image was divided into sub‐ regions and from each sub‐region the robust coefficients against disruptive elements ‐ like face expression, illumination, etc. ‐ were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its operational convenience, codes that were optimized for a DSP received a functional test after the computer simulation. In these functional tests, the face recognition system attained a 97.4% success rate on the most popular face database: the FRGC.",
"title": ""
},
{
"docid": "28c2948fb0df6113c31a3f8acdc45db5",
"text": "Group recommendation aims to recommend items for a group of users, e.g., recommending a restaurant for a group of colleagues. The group recommendation problem is challenging, in that a good model should understand the group decision making process appropriately: users are likely to follow decisions of only a few users, who are group′s leaders or experts. To address this challenge, we propose using an attention mechanism to capture the impact of each user in a group. Specifically, our model learns the influence weight of each user in a group and recommends items to the group based on its members′ weighted preferences. Moreover, our model can dynamically adjust the weight of each user across the groups; thus, the model provides a new and flexible method to model the complicated group decision making process, which differentiates us from other existing solutions. Through extensive experiments, it has demonstrated that our model significantly outperforms baseline methods for the group recommendation problem.",
"title": ""
},
{
"docid": "c5bbf52d3f62e27c1070ecc17930f7bc",
"text": "More and more web applications suffer the presence of cross-site scripting vulnerabilities that could be exploited by attackers to access sensitive information (such as credentials or credit card numbers). Hence proper tests are required to assess the security of web applications. In this paper, we resort to a search based approach for security testing web applications. We take advantage of static analysis to detect candidate cross-site scripting vulnerabilities. Input values that expose these vulnerabilities are searched by a genetic algorithm and, to help the genetic algorithm escape local optima, symbolic constraints are collected at run-time and passed to a solver. Search results represent test cases to be used by software developers to understand and fix security problems. We implemented this approach in a prototype and evaluated it on real world PHP code.",
"title": ""
},
{
"docid": "ada1db1673526f98840291977998773d",
"text": "The effect of immediate versus delayed feedback on rule-based and information-integration category learning was investigated. Accuracy rates were examined to isolate global performance deficits, and model-based analyses were performed to identify the types of response strategies used by observers. Feedback delay had no effect on the accuracy of responding or on the distribution of best fitting models in the rule-based category-learning task. However, delayed feedback led to less accurate responding in the information-integration category-learning task. Model-based analyses indicated that the decline in accuracy with delayed feedback was due to an increase in the use of rule-based strategies to solve the information-integration task. These results provide support for a multiple-systems approach to category learning and argue against the validity of single-system approaches.",
"title": ""
},
{
"docid": "cd16afd19a0ac72cd3453a7b59aad42b",
"text": "BACKGROUND\nIncreased flexibility is often desirable immediately prior to sports performance. Static stretching (SS) has historically been the main method for increasing joint range-of-motion (ROM) acutely. However, SS is associated with acute reductions in performance. Foam rolling (FR) is a form of self-myofascial release (SMR) that also increases joint ROM acutely but does not seem to reduce force production. However, FR has never previously been studied in resistance-trained athletes, in adolescents, or in individuals accustomed to SMR.\n\n\nOBJECTIVE\nTo compare the effects of SS and FR and a combination of both (FR+SS) of the plantarflexors on passive ankle dorsiflexion ROM in resistance-trained, adolescent athletes with at least six months of FR experience.\n\n\nMETHODS\nEleven resistance-trained, adolescent athletes with at least six months of both resistance-training and FR experience were tested on three separate occasions in a randomized cross-over design. The subjects were assessed for passive ankle dorsiflexion ROM after a period of passive rest pre-intervention, immediately post-intervention and after 10, 15, and 20 minutes of passive rest. Following the pre-intervention test, the subjects randomly performed either SS, FR or FR+SS. SS and FR each comprised 3 sets of 30 seconds of the intervention with 10 seconds of inter-set rest. FR+SS comprised the protocol from the FR condition followed by the protocol from the SS condition in sequence.\n\n\nRESULTS\nA significant effect of time was found for SS, FR and FR+SS. Post hoc testing revealed increases in ROM between baseline and post-intervention by 6.2% for SS (p < 0.05) and 9.1% for FR+SS (p < 0.05) but not for FR alone. Post hoc testing did not reveal any other significant differences between baseline and any other time point for any condition. A significant effect of condition was observed immediately post-intervention. Post hoc testing revealed that FR+SS was superior to FR (p < 0.05) for increasing ROM.\n\n\nCONCLUSIONS\nFR, SS and FR+SS all lead to acute increases in flexibility and FR+SS appears to have an additive effect in comparison with FR alone. All three interventions (FR, SS and FR+SS) have time courses that lasted less than 10 minutes.\n\n\nLEVEL OF EVIDENCE\n2c.",
"title": ""
},
{
"docid": "ae5df62bc13105298ae28d11a0a92ffa",
"text": "This work presents an approach for estimating the effect of the fractional-N phase locked loop (Frac-N PLL) phase noise profile on frequency modulated continuous wave (FMCW) radar precision. Unlike previous approaches, the proposed modelling method takes the actual shape of the phase noise profile into account leading to insights on the main regions dominating the precision. Estimates from the proposed model are in very good agreement with statistical simulations and measurement results from an FMCW radar test chip fabricated on an IBM7WL BiCMOS 0.18 μm technology. At 5.8 GHz center frequency, a close-in phase noise of −75 dBc/Hz at 1 kHz offset is measured. A root mean squared (RMS) chirp nonlinearity error of 14.6 kHz and a ranging precision of 0.52 cm are achieved which competes with state-of-the-art FMCW secondary radars.",
"title": ""
},
{
"docid": "1238556dbcd297f363fb2116b7ffbab4",
"text": "We describe an efficient method to produce objects comprising spatially controlled and graded cross-link densities using vat photopolymerization additive manufacturing (AM). Using a commercially available diacrylate-based photoresin, 3D printer, and digital light processing (DLP) projector, we projected grayscale images to print objects in which the varied light intensity was correlated to controlled cross-link densities and associated mechanical properties. Cylinder and bar test specimens were used to establish correlations between light intensities used for printing and cross-link density in the resulting specimens. Mechanical testing of octet truss unit cells in which the properties of the crossbars and vertices were independently modified revealed unique mechanical responses from the different compositions. From the various test geometries, we measured changes in mechanical properties such as increased strain-to-break in inhomogeneous structures in comparison with homogeneous variants.",
"title": ""
},
{
"docid": "61ae61d0950610ee2ad5e07f64f9b983",
"text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.",
"title": ""
},
{
"docid": "e6e6eb1f1c0613a291c62064144ff0ba",
"text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. Adolescents and young adults are more likely to engage in SMS messing, making phone calls, accessing the internet from their phone or playing a mobile driven game. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation seeking behavior has also linked adolescents and young adults to have the desire to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self – esteem use cell phones to form and maintain social relationships. They form an attachment with cell phone which molded their mind that they cannot function without their cell phone on a day-to-day basis. In this context, the study attempts to examine the extent of use of mobile phone and its influence on the academic performance of the students. A face to face survey using structured questionnaire was the method used to elicit the opinions of students between the age group of 18-25 years in three cities covering all the three regions the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants are college-going and were mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. The Statistical Package for Social Sciences (SPSS 16) had been used to work out the distribution of samples in terms of percentages for each specified parameter.",
"title": ""
},
{
"docid": "08df6cd44a26be6c4cc96082631a0e6e",
"text": "In the natural habitat of our ancestors, physical activity was not a preventive intervention but a matter of survival. In this hostile environment with scarce food and ubiquitous dangers, human genes were selected to optimize aerobic metabolic pathways and conserve energy for potential future famines.1 Cardiac and vascular functions were continuously challenged by intermittent bouts of high-intensity physical activity and adapted to meet the metabolic demands of the working skeletal muscle under these conditions. When speaking about molecular cardiovascular effects of exercise, we should keep in mind that most of the changes from baseline are probably a return to normal values. The statistical average of physical activity in Western societies is so much below the levels normal for our genetic background that sedentary lifestyle in combination with excess food intake has surpassed smoking as the No. 1 preventable cause of death in the United States.2 Physical activity has been shown to have beneficial effects on glucose metabolism, skeletal muscle function, ventilator muscle strength, bone stability, locomotor coordination, psychological well-being, and other organ functions. However, in the context of this review, we will focus entirely on important molecular effects on the cardiovascular system. The aim of this review is to provide a bird’s-eye view on what is known and unknown about the physiological and biochemical mechanisms involved in mediating exercise-induced cardiovascular effects. The resulting map is surprisingly detailed in some areas (ie, endothelial function), whereas other areas, such as direct cardiac training effects in heart failure, are still incompletely understood. For practical purposes, we have decided to use primarily an anatomic approach to present key data on exercise effects on cardiac and vascular function. For the cardiac effects, the left ventricle and the cardiac valves will be described separately; for the vascular effects, we will follow the arterial vascular tree, addressing changes in the aorta, the large conduit arteries, the resistance vessels, and the microcirculation before turning our attention toward the venous and the pulmonary circulation (Figure 1). Cardiac Effects of Exercise Left Ventricular Myocardium and Ventricular Arrhythmias The maintenance of left ventricular (LV) mass and function depends on regular exercise. Prolonged periods of physical inactivity, as studied in bed rest trials, lead to significant reductions in LV mass and impaired cardiac compliance, resulting in reduced upright stroke volume and orthostatic intolerance.3 In contrast, a group of bed rest subjects randomized to regular supine lower-body negative pressure treadmill exercise showed an increase in LV mass and a preserved LV stoke volume.4 In previously sedentary healthy subjects, a 12-week moderate exercise program induced a mild cardiac hypertrophic response as measured by cardiac magnetic resonance imaging.5 These findings highlight the plasticity of LV mass and function in relation to the current level of physical activity.",
"title": ""
},
{
"docid": "d8127fc372994baee6fd8632d585a347",
"text": "Dynamic query interfaces (DQIs) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new approach to DQI algorithms that works well with large databases.",
"title": ""
}
] |
scidocsrr
|
9f4188197a105d99f6ef0bf2663b7f78
|
Retinal Optic Disc Segmentation Using Conditional Generative Adversarial Network
|
[
{
"docid": "d8cc257b156a618b10b97db70306dcfe",
"text": "This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two set of specialized layers are trained to solve both the retinal vessel and optic disc segmentation. We present experimental validation, both qualitative and quantitative, in four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as control.",
"title": ""
}
] |
[
{
"docid": "e5ad17a5e431c8027ae58337615a60bd",
"text": "In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.",
"title": ""
},
{
"docid": "251210e932884c2103f7f2d71c5ec519",
"text": "Recent work on deep neural networks as acoustic models for automatic speech recognition (ASR) have demonstrated substantial performance improvements. We introduce a model which uses a deep recurrent auto encoder neural network to denoise input features for robust ASR. The model is trained on stereo (noisy and clean) audio features to predict clean features given noisy input. The model makes no assumptions about how noise affects the signal, nor the existence of distinct noise environments. Instead, the model can learn to model any type of distortion or additive noise given sufficient training data. We demonstrate the model is competitive with existing feature denoising approaches on the Aurora2 task, and outperforms a tandem approach where deep networks are used to predict phoneme posteriors directly.",
"title": ""
},
{
"docid": "37aca8c5ec945d4a91984683538b0bc6",
"text": "Little is known about the neurobiological mechanisms underlying prosocial decisions and how they are modulated by social factors such as perceived group membership. The present study investigates the neural processes preceding the willingness to engage in costly helping toward ingroup and outgroup members. Soccer fans witnessed a fan of their favorite team (ingroup member) or of a rival team (outgroup member) experience pain. They were subsequently able to choose to help the other by enduring physical pain themselves to reduce the other's pain. Helping the ingroup member was best predicted by anterior insula activation when seeing him suffer and by associated self-reports of empathic concern. In contrast, not helping the outgroup member was best predicted by nucleus accumbens activation and the degree of negative evaluation of the other. We conclude that empathy-related insula activation can motivate costly helping, whereas an antagonistic signal in nucleus accumbens reduces the propensity to help.",
"title": ""
},
{
"docid": "ce863c10e38ca976f0f994b3d1c4f9f1",
"text": "Grammatical inference – used successfully in a variety of fields such as pattern recognition, computational biology and natural language processing – is the process of automatically inferring a grammar by examining the sentences of an unknown language. Software engineering can also benefit from grammatical inference. Unlike these other fields, which use grammars as a convenient tool to model naturally occuring patterns, software engineering treats grammars as first-class objects typically created and maintained for a specific purpose by human designers. We introduce the theory of grammatical inference and review the state of the art as it relates to software engineering.",
"title": ""
},
{
"docid": "91f20c48f5a4329260aadb87a0d8024c",
"text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.",
"title": ""
},
{
"docid": "1f5a30218a65e79bdfffb2c2d7dfcc30",
"text": "A lot of applications depend on reliable and stable Internet connectivity. These characteristics are crucial for missioncritical services such as telemedical applications. An important factor that can affect connection availability is the convergence time of BGP, the de-facto inter-domain routing (IDR) protocol in the Internet. After a routing change, it may take several minutes until the network converges and BGP routing becomes stable again [13]. Kotronis et al. [8,9] propose a novel Internet routing approach based on SDN principles that combines several Autonomous Systems (AS) into groups, called clusters, and introduces a logically centralized routing decision process for the cluster participants. One of the goals of this concept is to stabilize the IDR system and bring down its convergence time. However, testing whether such approaches can improve on BGP problems requires hybrid SDN and BGP experimentation tools that can emulate multiple ASes. Presently, there is a lack of an easy to use public tool for this purpose. This work fills this gap by building a suitable emulation framework and evaluating the effect that a proof-of-concept IDR controller has on IDR convergence time.",
"title": ""
},
{
"docid": "a1dec377f2f17a508604d5101a5b0e44",
"text": "The goal of this work is to develop a soft robotic manipulation system that is capable of autonomous, dynamic, and safe interactions with humans and its environment. First, we develop a dynamic model for a multi-body fluidic elastomer manipulator that is composed entirely from soft rubber and subject to the self-loading effects of gravity. Then, we present a strategy for independently identifying all unknown components of the system: the soft manipulator, its distributed fluidic elastomer actuators, as well as drive cylinders that supply fluid energy. Next, using this model and trajectory optimization techniques we find locally optimal open-loop policies that allow the system to perform dynamic maneuvers we call grabs. In 37 experimental trials with a physical prototype, we successfully perform a grab 92% of the time. By studying such an extreme example of a soft robot, we can begin to solve hard problems inhibiting the mainstream use of soft machines.",
"title": ""
},
{
"docid": "4a5a5958eaf3a011a04d4afc1155e521",
"text": "1 Department of Geography, University of Kentucky, Lexington, Kentucky, United States of America, 2 Microsoft Research, New York, New York, United States of America, 3 Data & Society, New York, New York, United States of America, 4 Information Law Institute, New York University, New York, New York, United States of America, 5 Department of Media and Communications, London School of Economics, London, United Kingdom, 6 Harvard-Smithsonian Center for Astrophysics, Harvard University, Cambridge, Massachusetts, United States of America, 7 Center for Engineering Ethics and Society, National Academy of Engineering, Washington, DC, United States of America, 8 Institute for Health Aging, University of California-San Francisco, San Francisco, California, United States of America, 9 Ethical Resolve, Santa Cruz, California, United States of America, 10 Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 11 Department of Sociology, Columbia University, New York, New York, United States of America, 12 Carey School of Law, University of Maryland, Baltimore, Maryland, United States of America",
"title": ""
},
{
"docid": "e8babc224158f04da2eccd13a4b14b76",
"text": "SFI Working Papers contain accounts of scienti5ic work of the author(s) and do not necessarily represent the views of the Santa Fe Institute. We accept papers intended for publication in peer-‐reviewed journals or proceedings volumes, but not papers that have already appeared in print. Except for papers by our external faculty, papers must be based on work done at SFI, inspired by an invited visit to or collaboration at SFI, or funded by an SFI grant.",
"title": ""
},
{
"docid": "cffce89fbb97dc1d2eb31a060a335d3c",
"text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques -namely, Naı̈ve Bayes and Maximum Entropy (ME)when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. ◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.",
"title": ""
},
{
"docid": "f7d023abf0f651177497ae38d8494efc",
"text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.",
"title": ""
},
{
"docid": "30aa4e82b5e8a8fb3cc7bea65f389014",
"text": "Numerous studies on the mechanisms of ankle injury deal with injuries to the syndesmosis and anterior ligamentous structures but a previous sectioning study also describes the important role of the posterior talofibular ligament (PTaFL) in the ankle's resistance to external rotation of the foot. It was hypothesized that failure level external rotation of the foot would lead to injury of the PTaFL. Ten ankles were tested by externally rotating the foot until gross injury. Two different frequencies of rotation were used in this study, 0.5 Hz and 2 Hz. The mean failure torque of the ankles was 69.5+/-11.7 Nm with a mean failure angle of 40.7+/-7.3 degrees . No effects of rotation frequency or flexion angle were noted. The most commonly injured structure was the PTaFL. Visible damage to the syndesmosis only occurred in combination with fibular fracture in these experiments. The constraint of the subtalar joint in the current study may have affected the mechanics of the foot and led to the resultant strain in the PTaFL. In the real world, talus rotations may be affected by athletic footwear that may influence the location and potential for an ankle injury under external rotation of the foot.",
"title": ""
},
{
"docid": "173f5497089e86c29075df964891ca13",
"text": "Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using pruning algorithm. The pruning process attempts to eliminate as many connections as possible from the network. Relevant and irrelevant attributes of the data are distinguished during the training process. Those that are relevant will be kept and others will be automatically discarded. From the simplified networks having small number of connections and nodes we may easily able to extract symbolic rules using the proposed algorithm. Extensive experimental results on several benchmarks problems in neural networks demonstrate the effectiveness of the proposed approach with good generalization ability.",
"title": ""
},
{
"docid": "bbed3608cbae4ce9d21fa3c1413ecff6",
"text": "In this paper, a new improved plate detection method which uses genetic algorithm (GA) is proposed. GA randomly scans an input image using a fixed detection window repeatedly, until a region with the highest evaluation score is obtained. The performance of the genetic algorithm is evaluated based on the area coverage of pixels in an input image. It was found that the GA can cover up to 90% of the input image in just less than an average of 50 iterations using 30×130 detection window size, with 20 population members per iteration. Furthermore, the algorithm was tested on a database that contains 1537 car images. Out of these images, more than 98% of the plates were successfully detected.",
"title": ""
},
{
"docid": "f20c0ace77f7b325d2ae4862d300d440",
"text": "http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: xlzheng@zju.edu.cn (X. Zheng), nblin@zju.edu.cn (Z. Lin), alexwang@zju.edu.cn (X. Wang), klin@ece.uci.edu (K.-J. Lin), mnsong@bupt.edu.cn (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e",
"title": ""
},
{
"docid": "fb0875ee874dc0ada51d0097993e16c8",
"text": "The literature on testing effects is vast but supports surprisingly few prescriptive conclusions for how to schedule practice to achieve both durable and efficient learning. Key limitations are that few studies have examined the effects of initial learning criterion or the effects of relearning, and no prior research has examined the combined effects of these 2 factors. Across 3 experiments, 533 students learned conceptual material via retrieval practice with restudy. Items were practiced until they were correctly recalled from 1 to 4 times during an initial learning session and were then practiced again to 1 correct recall in 1-5 subsequent relearning sessions (across experiments, more than 100,000 short-answer recall responses were collected and hand-scored). Durability was measured by cued recall and rate of relearning 1-4 months after practice, and efficiency was measured by total practice trials across sessions. A consistent qualitative pattern emerged: The effects of initial learning criterion and relearning were subadditive, such that the effects of initial learning criterion were strong prior to relearning but then diminished as relearning increased. Relearning had pronounced effects on long-term retention with a relatively minimal cost in terms of additional practice trials. On the basis of the overall patterns of durability and efficiency, our prescriptive conclusion for students is to practice recalling concepts to an initial criterion of 3 correct recalls and then to relearn them 3 times at widely spaced intervals.",
"title": ""
},
{
"docid": "562f0d3835fbd8c79dfef72c2bf751b4",
"text": "Alzheimer’s disease (AD) is the most common age-related neurodegenerative disease and has become an urgent public health problem in most areas of the world. Substantial progress has been made in understanding the basic neurobiology of AD and, as a result, new drugs for its treatment have become available. Cholinesterase inhibitors (ChEIs), which increase the availability of acetylcholine in central synapses, have become the main approach to symptomatic treatment. ChEIs that have been approved or submitted to the US Food and Drug Administration (FDA) include tacrine, donepezil, metrifonate, rivastigmine and galantamine. In this review we discuss their pharmacology, clinical experience to date with their use and their potential benefits or disadvantages. ChEIs have a significant, although modest, effect on the cognitive status of patients with AD. In addition to their effect on cognition, ChEIs have a positive effect on mood and behaviour. Uncertainty remains about the duration of the benefit because few studies of these compounds beyond one year have been published. Although ChEIs are generally well tolerated, all patients should be followed closely for possible adverse effects. There is no substantial difference in the effectivenes of the various ChEIs, however, they may have different safety profiles. We believe the benefits of their use outweigh the risks and costs and, therefore, ChEIs should be considered as primary therapy for patients with mild to moderate AD.",
"title": ""
},
{
"docid": "b42788c688193d653bd77379375531ed",
"text": "Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of test error with increasing network sizes, and could potentially explain the improvement in generalization with over-parametrization. We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.",
"title": ""
},
{
"docid": "ec323459d1bd85c80bc54dc9114fd8b8",
"text": "The hype around mobile payments has been growing in Sri Lanka with the exponential growth of the mobile adoption and increasing connectivity to the Internet. Mobile payments offer advantages in comparison to other payment modes, benefiting both the consumer and the society at large. Drawing upon the traditional technology adoption theories, this research develops a conceptual framework to uncover the influential factors fundamental to the mobile payment usage. The phenomenon discussed in this research is the factors influencing the use of mobile payments. In relation to the topic, nine independent factors were selected and their influence is to be tested onto behavioral intention to use mobile payments. The questionnaires need to be handed out for data collection for correlation analyses to track the relationship between the nine independent variables and the dependent variable — behavioral intention to use mobile payments. The second correlation analysis between behavioral intention to mobile payments and mobile payment usage is also to be checked together with the two moderating variables — age and level of education.",
"title": ""
},
{
"docid": "f8a5fb5f323f036d38959f97815337a5",
"text": "OBJECTIVE\nEarly screening of autism increases the chance of receiving timely intervention. Using the Parent Report Questionnaires is effective in screening autism. The Q-CHAT is a new instrument that has shown several advantages than other screening tools. Because there is no adequate tool for the early screening of autistic traits in Iranian children, we aimed to investigate the adequacy of the Persian translation of Q-CHAT.\n\n\nMETHOD\nAt first, we prepared the Persian translation of the Quantitative Checklist for Autism in Toddlers (Q-CHAT). After that, an appropriate sample was selected and the check list was administered. Our sample included 100 children in two groups (typically developing and autistic children) who had been selected conveniently. Pearson's r was used to determine test-retest reliability, and Cronbach's alpha coefficient was used to explore the internal consistency of Q-CHAT. We used the receiver operating characteristics curve (ROC) to investigate whether Q-CHAT can adequately discriminate between typically developing and ASD children or not. Data analysis was carried out by SPSS 19.\n\n\nRESULT\nThe typically developing group consisted of 50 children with the mean age of 27.14 months, and the ASD group included50 children with the mean age of 29.62 months. The mean of the total score for the typically developing group was 22.4 (SD=6.26) on Q-CHAT and it was 50.94 (SD=12.35) for the ASD group, which was significantly different (p=0.00).The Cronbach's alpha coefficient of the checklist was 0.886, and test-retest reliability was calculated as 0.997 (p<0.01). The estimated area under the curve (AUC) was 0.971. It seems that the total score equal to 30 can be a good cut point to identify toddlers who are at risk of autism (sensitivity= 0.96 and specificity= 0.90).\n\n\nCONCLUSION\nThe Persian translation of Q-CHAT has good reliability and predictive validity and can be used as a screening tool to detect 18 to 24 months old children who are at risk of autism.",
"title": ""
}
] |
scidocsrr
|
ed78e93aa4b80295df49117026e7ca2b
|
Integrating Automated Fingerprint-Based Attendance into a University Portal System
|
[
{
"docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8",
"text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. 273 At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …",
"title": ""
},
{
"docid": "b114ebfd30146d8fcb175db42b5e898e",
"text": "Smartphones are becoming more preferred companions to users than desktops or notebooks. Knowing that smartphones are most popular with users at the age around 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system that is based on a QR code, which is being displayed for students during or at the beginning of each lecture. The students will need to scan the code in order to confirm their attendance. The paper explains the high level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations. Keywords—Mobile Computing; Attendance System; Educational System; GPS",
"title": ""
},
{
"docid": "b4e56855d6f41c5829b441a7d2765276",
"text": "College student attendance management of class plays an important position in the work of management of college student, this can help to urge student to class on time, improve learning efficiency, increase learning grade, and thus entirely improve the education level of the school. Therefore, colleges need an information system platform of check attendance management of class strongly to enhance check attendance management of class using the information technology which gathers the basic information of student automatically. According to current reality and specific needs of check attendance and management system of college students and the exist device of the system. Combined with the study of college attendance system, this paper gave the node design of check attendance system of class which based on RFID on the basic of characteristics of embedded ARM and RFID technology.",
"title": ""
},
{
"docid": "6fd89ac5ec4cfd0f6c28e01c8d94ff7a",
"text": "This paper describes the development of a student attendance system based on Radio Frequency Identification (RFID) technology. The existing conventional attendance system requires students to manually sign the attendance sheet every time they attend a class. As common as it seems, such system lacks of automation, where a number of problems may arise. This include the time unnecessarily consumed by the students to find and sign their name on the attendance sheet, some students may mistakenly or purposely signed another student's name and the attendance sheet may got lost. Having a system that can automatically capture student's attendance by flashing their student card at the RFID reader can really save all the mentioned troubles. This is the main motive of our system and in addition having an online system accessible anywhere and anytime can greatly help the lecturers to keep track of their students' attendance. Looking at a bigger picture, deploying the system throughout the academic faculty will benefit the academic management as students' attendance to classes is one of the key factor in improving the quality of teaching and monitoring their students' performance. Besides, this system provides valuable online facilities for easy record maintenance offered not only to lecturers but also to related academic management staffs especially for the purpose of students' progress monitoring.",
"title": ""
}
] |
[
{
"docid": "0837ca7bd6e28bb732cfdd300ccecbca",
"text": "In our previous research we have made literature analysis and discovered possible mind map application areas. We have pointed out why currently developed software and methods are not adequate and why we are developing a new one. We have defined system architecture and functionality that our software would have. After that, we proceeded with text-mining algorithm development and testing after which we have concluded with our plans for further research. In this paper we will give basic notions about previously published article and present our custom developed software for automatic mind map generation. This software will be tested. Generated mind maps will be critically analyzed. The paper will be concluded with research summary and possible further research and software improvement.",
"title": ""
},
{
"docid": "1e042aca14a3412a4772761109cb6c10",
"text": "With increasing quality requirements for multimedia communications, audio codecs must maintain both high quality and low delay. Typically, audio codecs offer either low delay or high quality, but rarely both. We propose a codec that simultaneously addresses both these requirements, with a delay of only 8.7 ms at 44.1 kHz. It uses gain-shape algebraic vector quantization in the frequency domain with time-domain pitch prediction. We demonstrate that the proposed codec operating at 48 kb/s and 64 kb/s out-performs both G.722.1C and MP3 and has quality comparable to AAC-LD, despite having less than one fourth of the algorithmic delay of these codecs.",
"title": ""
},
{
"docid": "cccf05e566b64afb7f9a16ffcf41f013",
"text": "This paper provides new insight into maximizing F1 measures in the context of binary classification and also in the context of multilabel classification. The harmonic mean of precision and recall, the F1 measure is widely used to evaluate the success of a binary classifier when one class is rare. Micro average, macro average, and per instance average F1 measures are used in multilabel classification. For any classifier that produces a real-valued output, we derive the relationship between the best achievable F1 value and the decision-making threshold that achieves this optimum. As a special case, if the classifier outputs are well-calibrated conditional probabilities, then the optimal threshold is half the optimal F1 value. As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive. When the actual prevalence of positive examples is low, this behavior can be undesirable. As a case study, we discuss the results, which can be surprising, of maximizing F1 when predicting 26,853 labels for Medline documents.",
"title": ""
},
{
"docid": "ef5cf3dfbff25b438d37111c13ecbbc1",
"text": "A program, EMOD2D, in C++, is developed to compute the apparent resistivities of two-dimensional geological structures for magnetotelluric (MT) H-polarization case, using finite difference technique. Five C++ classes with member functions are designed to compute apparent resistivitiy values. The program utilizes object oriented programming features such as multiple inheritance, encapsulation to allow the user to easily deploy and modify the given classes. This program has been used to study the responses of various ore deposit models. This modelling study is relevant to: (i) search for mineral deposits underlying conductive or resistive overburden, (ii) understand the response pattern for the ore body with different depths and conductivities, and (iii) study the response pattern for the ore body when it is in contact and without contact with the overburden. The program is compiled on Borland C++. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9a55e9bafa98f01a0ea7f36a9764f8c2",
"text": "AIM\nTo determine socio-demographic features and criminal liability of individuals who committed filicide in Turkey.\n\n\nMETHOD\nThe study involved 85 cases of filicide evaluated by the 4th Specialized Board of the Institute of Forensic Medicine in Istanbul in the 1995-2000 period. We assessed the characteristics of parents who committed filicide (age, sex, education level, employment status, and criminal liability) and children victims (age, sex, own or stepchild), as well as the causes of death.\n\n\nRESULTS\nThere were 85 parents who committed filicide (41 fathers and 44 mothers) and 96 children victims. The mean age of mothers who committed filicide (52% of filicides) was 26.5-/+7.7 years, and the mean age of fathers (48% of filicides) was 36.1-/+10.0 years (t=-5.00, p<0.001). Individuals diagnosed with psychiatric disturbances, such as schizophrenia (61%), major depression (22%), imbecility (10%), and mild mental retardation (7%), were not subject to criminal liability. Almost half of parents who committed filicide were unemployed and illiterate.\n\n\nCONCLUSION\nFilicide in Turkey was equally committed by mothers and fathers. More than half of the parents were diagnosed with psychiatric disorders and came from disadvantageous socioeconomic environments, where unemployment and illiteracy rates are highly above the average of Turkey.",
"title": ""
},
{
"docid": "88334287928e86a89ed8d9e5974a5d6b",
"text": "Reading is the gateway to success in education. It is the heartbeat of all courses offered in institutions. It is therefore crucial to investigate Colleges of Education students reading habits and how to improve the skill. The study was a descriptive survey with a validated questionnaire on “Reading Habits among Colleges of Education students in the Information Age” (RHCESIA). A total number of two hundred (200) students were used from the two Colleges of Education in Oyo town, with gender and age as the moderating variables. The findings showed that almost all the respondents understand the importance of reading. 65.5% love to read from their various fields of specialization on a daily basis while 25.0% love reading from their fields of specialization every week. The study confirmed that good reading habits enhance academic performance. The study recommended that courses on communication skills should be included for the first year (100 level) students and prose work and fiction such as novels should be a compulsory course for second year students (200 level)",
"title": ""
},
{
"docid": "79c14cc420caa8db93bc74916ce5bb4d",
"text": "Hadoop has become the de facto platform for large-scale data analysis in commercial applications, and increasingly so in scientific applications. However, Hadoop's byte stream data model causes inefficiencies when used to process scientific data that is commonly stored in highly-structured, array-based binary file formats resulting in limited scalability of Hadoop applications in science. We introduce Sci-Hadoop, a Hadoop plugin allowing scientists to specify logical queries over array-based data models. Sci-Hadoop executes queries as map/reduce programs defined over the logical data model. We describe the implementation of a Sci-Hadoop prototype for NetCDF data sets and quantify the performance of five separate optimizations that address the following goals for several representative aggregate queries: reduce total data transfers, reduce remote reads, and reduce unnecessary reads. Two optimizations allow holistic aggregate queries to be evaluated opportunistically during the map phase; two additional optimizations intelligently partition input data to increase read locality, and one optimization avoids block scans by examining the data dependencies of an executing query to prune input partitions. Experiments involving a holistic function show run-time improvements of up to 8x, with drastic reductions of IO, both locally and over the network.",
"title": ""
},
{
"docid": "e5b7435bd9b761e85bf3ffed0c4a8ee0",
"text": "A novel method of analysis for a linear series-fed microstrip antenna array is developed, which is based on a set of canonical coefficients defined for the elements of the array. This method accounts for the mutual coupling between the elements, and allows for a design that has an arbitrary amplitude and phase of the radiating patch currents. This method has the simplicity of a CAD approach while maintaining an accuracy close to that of a full-wave method. The coefficients involved in the formulation are determined by full-wave simulation on either a single patch element or on two patch elements (for mutual coupling calculations).",
"title": ""
},
{
"docid": "78f1b3a8b9aeff9fb860b46d6a2d8eab",
"text": "We study the possibility to extend the concept of linguistic data summaries employing the notion of bipolarity. Yager's linguistic summaries may be derived using a fuzzy linguistic querying interface. We look for a similar analogy between bipolar queries and the extended form of linguistic summaries. The general concept of bipolar query, and its special interpretation are recalled, which turns out to be applicable to accomplish our goal. Some preliminary results are presented and possible directions of further research are pointed out.",
"title": ""
},
{
"docid": "bda2541d2c2a5a5047b29972cb1536f6",
"text": "Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT.",
"title": ""
},
{
"docid": "b23e34b3e2571379cafa7c34cdf532e7",
"text": "This article describes the change in partial discharge (PD) pattern of high voltage rotating machines and the change in the tan /spl delta/ as a function of the applied test voltage during the aging processes as caused by the application of different stresses on stator bars. It also compares the PD patterns associated with internal, slot, and end-winding discharges, which were produced in well-controlled laboratory conditions. In addition, the influence of different temperature conditions on the partial discharge activities are shown. The investigations in this work were performed on model stator bars under laboratory conditions, and the results might be different from those obtained for complete machines, as rotating machines are complex PD test objects, and for example, the detected PD signals in a complete machine significantly depend on the transmission path from the PD source to the measurement device.",
"title": ""
},
{
"docid": "12519f0131b8d451654ea790c977acd0",
"text": "In the early 1980s, Scandinavian software designers who sought to make systems design more participatory and democratic turned to prototyping. The \"Scandinavian challenge\" of making computers more democratic inspired others who became interested in user-centered design; information designers on both sides of the Atlantic began to employ prototyping as a way to encourage user participation and feedback in various design approaches. But, as European and North American researchers have pointed out, prototyping is seen as meeting very different needs in Scandinavia and in the US. Thus design approaches that originate on either side of the Atlantic have implemented prototyping quite differently, have deployed it to meet quite different goals, and have tended to understand prototyping results in different ways.These differences are typically glossed over in technical communication research. Technical communicators have lately become quite excited about prototyping's potential to help design documentation, but the technical communication literature shows little critical awareness of the methodological differences between Scandinavian and US prototyping. In this presentation, I map out some of these differences by comparing prototyping in a variety of design approaches originating in Scandinavia and the US, such as mock-ups, cooperative prototyping, CARD, PICTIVE, and contextual design. Finally, I discuss implications for future technical communication research involving prototyping.",
"title": ""
},
{
"docid": "826ad745258d73a9dc75c4d0938ae3bc",
"text": "Classification problems with a large number of classes inevitably involve overlapping or similar classes. In such cases it seems reasonable to allow the learning algorithm to make mistakes on similar classes, as long as the true class is still among the top-k (say) predictions. Likewise, in applications such as search engine or ad display, we are allowed to present k predictions at a time and the customer would be satisfied as long as her interested prediction is included. Inspired by the recent work of [15], we propose a very generic, robust multiclass SVM formulation that directly aims at minimizing a weighted and truncated combination of the ordered prediction scores. Our method includes many previous works as special cases. Computationally, using the Jordan decomposition Lemma we show how to rewrite our objective as the difference of two convex functions, based on which we develop an efficient algorithm that allows incorporating many popular regularizers (such as the l2 and l1 norms). We conduct extensive experiments on four real large-scale visual category recognition datasets, and obtain very promising performances.",
"title": ""
},
{
"docid": "b2180f74eb86fd589ff9799a0491bf20",
"text": "We propose a novel document clustering method which aims to cluster the documents into different semantic classes. The document space is generally of high dimensionality and clustering in such a high dimensional space is often infeasible due to the curse of dimensionality. By using locality preserving indexing (LPI), the documents can be projected into a lower-dimensional semantic space in which the documents related to the same semantics are close to each other. Different from previous document clustering methods based on latent semantic indexing (LSI) or nonnegative matrix factorization (NMF), our method tries to discover both the geometric and discriminating structures of the document space. Theoretical analysis of our method shows that LPI is an unsupervised approximation of the supervised linear discriminant analysis (LDA) method, which gives the intuitive motivation of our method. Extensive experimental evaluations are performed on the Reuters-21578 and TDT2 data sets.",
"title": ""
},
{
"docid": "4c711149abc3af05a8e55e52eefddd97",
"text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies relating half toning arc easily identifiable in the frequency domain. This paper proposes a method for de screening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to half toning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows arc filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.",
"title": ""
},
{
"docid": "f160e297ece985bd23b72cc5eef1b11d",
"text": "We propose to exploit reconstruction as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation playing a role similar to back-propagation but helping to reduce the reliance on derivatives in order to perform credit assignment across many levels of possibly strong nonlinearities (which is difficult for back-propagation). A regularized auto-encoder tends produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also allow to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of input and target (or of any side information) in input, then its reconstruction of input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that can could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier to model by them, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis stating that such auto-encoder mediated target propagation could play in brains the role of credit assignment through many non-linear, noisy and discrete transformations.",
"title": ""
},
{
"docid": "19ee2109c1b0bab578252dc23f3603c6",
"text": "When querying a news video archive, the users are interested in retrieving precise answers in the form of a summary that best answers the query. However, current video retrieval systems, including the search engines on the web, are designed to retrieve documents instead of precise answers. This research explores the use of question answering (QA) techniques to support personalized news video retrieval. Users interact with our system, VideoQA, using short natural language questions with implicit constraints on contents, context, duration, and genre of expected videos. VideoQA returns short precise news video summaries as answers. The main contributions of this research are: (a) the extension of QA technology to support QA in news video; and (b) the use of multi-modal features, including visual, audio, textual, and external resources, to help correct speech recognition errors and to perform precise question answering. The system has been tested on 7 days of news video and has been found to be effective.",
"title": ""
},
{
"docid": "e55067bddff5f7f3cb646d02342f419c",
"text": "Over the last two decades there have been several process models proposed (and used) for data and information fusion. A common theme of these models is the existence of multiple levels of processing within the data fusion process. In the 1980’s three models were adopted: the intelligence cycle, the JDL model and the Boyd control. The 1990’s saw the introduction of the Dasarathy model and the Waterfall model. However, each of these models has particular advantages and disadvantages. A new model for data and information fusion is proposed. This is the Omnibus model, which draws together each of the previous models and their associated advantages whilst managing to overcome some of the disadvantages. Where possible the terminology used within the Omnibus model is aimed at a general user of data fusion technology to allow use by a distributed audience.",
"title": ""
},
{
"docid": "70745e8cdf957b1388ab38a485e98e60",
"text": "Network studies of large-scale brain connectivity have begun to reveal attributes that promote the segregation and integration of neural information: communities and hubs. Network communities are sets of regions that are strongly interconnected among each other while connections between members of different communities are less dense. The clustered connectivity of network communities supports functional segregation and specialization. Network hubs link communities to one another and ensure efficient communication and information integration. This review surveys a number of recent reports on network communities and hubs, and their role in integrative processes. An emerging focus is the shifting balance between segregation and integration over time, which manifest in continuously changing patterns of functional interactions between regions, circuits and systems.",
"title": ""
},
{
"docid": "458470e18ce2ab134841f76440cfdc2b",
"text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",
"title": ""
}
] |
scidocsrr
|
04fe4878ee940d2ad8e9c724085c56dc
|
How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation
|
[
{
"docid": "ceb0a6ed0dd50b0a9f2973674f23f3bd",
"text": "This paper presents a semantic parsing and reasoning approach to automatically solving math word problems. A new meaning representation language is designed to bridge natural language text and math expressions. A CFG parser is implemented based on 9,600 semi-automatically created grammar rules. We conduct experiments on a test set of over 1,500 number word problems (i.e., verbally expressed number problems) and yield 95.4% precision and 60.2% recall.",
"title": ""
}
] |
[
{
"docid": "f6f5540d9479d42b7d0ccd108d81e3eb",
"text": "Falls are one of the major causes leading to injury of elderly people. Using wearable devices for fall detection has a high cost and may cause inconvenience to the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques-shape-based fall characterization and a learning-based classifier to distinguish falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. In order to eliminate the sensitivity of ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons, corresponding input weights, and biases of ELM. Using a low-cost Kinect depth camera, we build an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experimenting with the dataset shows that our approach can achieve up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that need multiple cameras.",
"title": ""
},
{
"docid": "d99747fb44a839a2ab8765c1176e4c77",
"text": "The aim of this paper is to explore text topic influence in authorship attribution. Specifically, we test the widely accepted belief that stylometric variables commonly used in authorship attribution are topic-neutral and can be used in multi-topic corpora. In order to investigate this hypothesis, we created a special corpus, which was controlled for topic and author simultaneously. The corpus consists of 200 Modern Greek newswire articles written by two authors in two different topics. Many commonly used stylometric variables were calculated and for each one we performed a two-way ANOVA test, in order to estimate the main effects of author, topic and the interaction between them. The results showed that most of the variables exhibit considerable correlation with the text topic and their exploitation in authorship analysis should be done with caution.",
"title": ""
},
{
"docid": "1bf796a1b7e802076e25b9d0742a7f91",
"text": "Modern computing devices and user interfaces have necessitated highly interactive querying. Some of these interfaces issue a large number of dynamically changing and continuous queries to the backend. In others, users expect to inspect results during the query formulation process, in order to guide or help them towards specifying a full-fledged query. Thus, users end up issuing a fast-changing workload to the underlying database. In such situations, the user's query intent can be thought of as being in flux. In this paper, we show that the traditional query execution engines are not well-suited for this new class of highly interactive workloads. We propose a novel model to interpret the variability of likely queries in a workload. We implemented a cyclic scan-based approach to process queries from such workloads in an efficient and practical manner while reducing the overall system load. We evaluate and compare our methods with traditional systems and demonstrate the scalability of our approach, enabling thousands of queries to run simultaneously within interactive response times given low memory and CPU requirements.",
"title": ""
},
{
"docid": "525da162b6490472f644cab024f6da7a",
"text": "Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical both due to prohibitive power consumption and the “curse of dimensionality,” which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations in standard digital computers. Custom analog or mixed-mode signal processors have been reported to yield much higher energy efficiency than DSP [1-4], presenting the means of overcoming these limitations. However, the use of volatile digital memory in [1-3] precludes their use in intermittently-powered devices, and the required interfacing and internal A/D/A conversions add power and area overhead. Nonvolatile storage is employed in [4], but the lack of learning capability requires task-specific programming before operation, and precludes online adaptation.",
"title": ""
},
{
"docid": "6a74c2d26f5125237929031cf1ccf204",
"text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.",
"title": ""
},
{
"docid": "35f61df81a2a31f68f2e5dd0501bcca4",
"text": "We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder based architecture, consisting of a probabilistic encoder and a probabilistic conditional decoder, our model can generate novel exemplars from seen/unseen classes, given their respective class attributes. These exemplars can subsequently be used to train any off-the-shelf classification model. One of the key aspects of our encoder-decoder architecture is a feedback-driven mechanism in which a discriminator (a multivariate regressor) learns to map the generated exemplars to the corresponding class attribute vectors, leading to an improved generator. Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings. Through a comprehensive set of experiments, we show that our model outperforms several state-of-the-art methods, on several benchmark datasets, for both standard as well as generalized zero-shot learning.",
"title": ""
},
{
"docid": "09dbfbd77307b0cd152772618c40e083",
"text": "Textbook Question Answering (TQA) [1] is a newly proposed task to answer arbitrary questions in middle school curricula, which has particular challenges to understand the long essays in additional to the images. Bilinear models [2], [3] are effective at learning high-level associations between questions and images, but are inefficient to handle the long essays. In this paper, we propose an Essay-anchor Attentive Multi-modal Bilinear pooling (EAMB), a novel method to encode the long essays into the joint space of the questions and images. The essay-anchors, embedded from the keywords, represent the essay information in a latent space. We propose a novel network architecture to pay special attention on the keywords in the questions, consequently encoding the essay information into the question features, and thus the joint space with the images. We then use the bilinear models to extract the multi-modal interactions to obtain the answers. EAMB successfully utilizes the redundancy of the pre-trained word embedding space to represent the essay-anchors. This avoids the extra learning difficulties from exploiting large network structures. Quantitative and qualitative experiments show the outperforming effects of EAMB on the TQA dataset.",
"title": ""
},
{
"docid": "2a941df686c9fe0986448b77e40a5b74",
"text": "What is the relationship between brain and behavior? The answer to this question necessitates characterizing the mapping between structure and function. The aim of this paper is to discuss broad issues surrounding the link between structure and function in the brain that will motivate a network perspective to understanding this question. However, as others in the past, I argue that a network perspective should supplant the common strategy of understanding the brain in terms of individual regions. Whereas this perspective is needed for a fuller characterization of the mind-brain, it should not be viewed as panacea. For one, the challenges posed by the many-to-many mapping between regions and functions is not dissolved by the network perspective. Although the problem is ameliorated, one should not anticipate a one-to-one mapping when the network approach is adopted. Furthermore, decomposition of the brain network in terms of meaningful clusters of regions, such as the ones generated by community-finding algorithms, does not by itself reveal \"true\" subnetworks. Given the hierarchical and multi-relational relationship between regions, multiple decompositions will offer different \"slices\" of a broader landscape of networks within the brain. Finally, I described how the function of brain regions can be characterized in a multidimensional manner via the idea of diversity profiles. The concept can also be used to describe the way different brain regions participate in networks.",
"title": ""
},
{
"docid": "c99ae6f1009fbbedb30eccc91d3ec83b",
"text": "This paper describes a new open-source cross-platform 'C' library for audio input and output. It is designed to simplify the porting of audio applications between various platforms, and also to simplify the development of audio programs in general by hiding the complexities of device interfacing. The API was worked out through community discussions on the music-dsp mailing list. A number of people have contributed to the development of the API and are listed on the web-site. Implementations of PortAudio for Windows MME and DirectSound, the Macintosh Sound Manager, and Unix OSS have been developed and are freely available on the web. Support for other platforms is being planned. The paper describes the use of PortAudio and discusses the issues involved in its development including the design philosophy, latency, callbacks versus blocking read/write calls, and efficiency .",
"title": ""
},
{
"docid": "a8d709ee5c0a9cd32b5e59c8d73394ca",
"text": "Spectrum awareness is currently one of the most challenging problems in cognitive radio (CR) design. Detection and classification of very low SNR signals with relaxed information on the signal parameters being detected is critical for proper CR functionality as it enables the CR to react and adapt to the changes in its radio environment. In this work, the cycle frequency domain profile (CDP) is used for signal detection and preprocessing for signal classification. Signal features are extracted from CDP using a threshold-test method. For classification, a Hidden Markov Model (HMM) has been used to process extracted signal features due to its robust pattern-matching capability. We also investigate the effects of varied observation length on signal detection and classification. It is found that the CDP-based detector and the HMM-based classifier can detect and classify incoming signals at a range of low SNRs.",
"title": ""
},
{
"docid": "77f408e456970e32551767e847ca1c19",
"text": "Many graph analytics problems can be solved via iterative algorithms where the solutions are often characterized by a set of steady-state conditions. Different algorithms respect to different set of fixed point constraints, so instead of using these traditional algorithms, can we learn an algorithm which can obtain the same steady-state solutions automatically from examples, in an effective and scalable way? How to represent the meta learner for such algorithm and how to carry out the learning? In this paper, we propose an embedding representation for iterative algorithms over graphs, and design a learning method which alternates between updating the embeddings and projecting them onto the steadystate constraints. We demonstrate the effectiveness of our framework using a few commonly used graph algorithms, and show that in some cases, the learned algorithm can handle graphs with more than 100,000,000 nodes in a single machine.",
"title": ""
},
{
"docid": "f19057578e0fce86e57d762d5805e676",
"text": "A polymer network of intranuclear lamin filaments underlies the nuclear envelope and provides mechanical stability to the nucleus in metazoans. Recent work demonstrates that the expression of A-type lamins scales positively with the stiffness of the cellular environment, thereby coupling nuclear and extracellular mechanics. Using the spectrin-actin network at the erythrocyte plasma membrane as a model, we contemplate how the relative stiffness of the nuclear scaffold impinges on the growing number of interphase-specific nuclear envelope remodeling events, including recently discovered, nuclear envelope-specialized quality control mechanisms. We suggest that a stiffer lamina impedes these remodeling events, necessitating local lamina remodeling and/or concomitant scaling of the efficacy of membrane-remodeling machineries that act at the nuclear envelope.",
"title": ""
},
{
"docid": "54176f9184a42a9f92e0f3f529b20cd9",
"text": "In recent years, convolutional neural networks (CNNs) are leading the way in many computer vision tasks, such as image classification, object detection, and face recognition. In order to produce more refined semantic image segmentation, we survey the powerful CNNs and novel elaborate layers, structures and strategies, especially including those that have achieved the state-of-the-art results on the Pascal VOC 2012 semantic segmentation challenge. Moreover, we discuss their different working stages and various mechanisms to utilize the structural and contextual information in the image and feature spaces. Finally, combining some popular underlying referential methods in homologous problems, we propose several possible directions and approaches to incorporate existing effective methods as components to enhance CNNs for the segmentation of specific semantic objects.",
"title": ""
},
{
"docid": "ee37a743edd1b87d600dcf2d0050ca18",
"text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.",
"title": ""
},
{
"docid": "fee8bbb0103a40bcfc30f9e71ae1b3e8",
"text": "Under certain circumstances, consumers are willing to pay a premium for privacy. We explore how choice architecture affects smartphone users’ stated willingness to install applications that request varying permissions. We performed two experiments to gauge smartphone users’ stated willingness to pay premiums to limit their personal information exposure when installing new applications. We found that when participants were comparison shopping between multiple applications that performed similar functionality, a quarter of our sample responded that they were willing to pay a $1.50 premium for the application that requested the fewest permissions—though only when viewing the requested permissions of each application side-by-side. In a second experiment, we more closely simulated the user experience by asking them to valuate a single application that featured multiple sets of permissions based on five between-subjects conditions. In this scenario, the requested permissions had a much smaller impact on participants’ responses. Our results suggest that many smartphone users are concerned with their privacy and are willing to pay premiums for applications that are less likely to request access to personal information. We propose improvements in choice architecture for smartphone application markets that could result in decreased satisficing and increased rational behavior.",
"title": ""
},
{
"docid": "019854be19420ba5e6badcd9adbb7dea",
"text": "We present a new shared-memory parallel algorithm and implementation called FASCIA for the problems of approximate sub graph counting and sub graph enumeration. The problem of sub graph counting refers to determining the frequency of occurrence of a given sub graph (or template) within a large network. This is a key graph analytic with applications in various domains. In bioinformatics, sub graph counting is used to detect and characterize local structure (motifs) in protein interaction networks. Exhaustive enumeration and exact counting is extremely compute-intensive, with running time growing exponentially with the number of vertices in the template. In this work, we apply the color coding technique to determine approximate counts of non-induced occurrences of the sub graph in the original network. Color coding gives a fixed-parameter algorithm for this problem, using a dynamic programming-based counting approach. Our new contributions are a multilevel shared-memory parallelization of the counting scheme and several optimizations to reduce the memory footprint. We show that approximate counts can be obtained for templates with up to 12 vertices, on networks with up to millions of vertices and edges. Prior work on this problem has only considered out-of-core parallelization on distributed platforms. With our new counting scheme, data layout optimizations, and multicore parallelism, we demonstrate a significant speedup over the current state-of-the-art for sub graph counting.",
"title": ""
},
{
"docid": "5e601792447020020aa02ee539b3a2cf",
"text": "The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.01 BLEU points on average.",
"title": ""
},
{
"docid": "ce7d164774826897e9d7386ec9159bba",
"text": "The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32 bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to lnpoly chosen plaintexts for any predetermined , where is the security parameter. Multiplication, encryption, and decryption are almost linear in , and addition is linear in . Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for = 1024 and = 16. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.",
"title": ""
},
{
"docid": "dd05688335b4240bbc40919870e30f39",
"text": "In this tool report, we present an overview of the Watson system, a Semantic Web search engine providing various functionalities not only to find and locate ontologies and semantic data online, but also to explore the content of these semantic documents. Beyond the simple facade of a search engine for the Semantic Web, we show that the availability of such a component brings new possibilities in terms of developing semantic applications that exploit the content of the Semantic Web. Indeed, Watson provides a set of APIs containing high level functions for finding, exploring and querying semantic data and ontologies that have been published online. Thanks to these APIs, new applications have emerged that connect activities such as ontology construction, matching, sense disambiguation and question answering to the Semantic Web, developed by our group and others. In addition, we also describe Watson as a unprecedented research platform for the study the Semantic Web, and of formalised knowledge in general.",
"title": ""
}
] |
scidocsrr
|
6b259af65188a35e5a54d332048f1d78
|
Issues in Cloud Computing
|
[
{
"docid": "8448f57118fb3db90a4f793cbebc1bc8",
"text": "Motivated by increased concern over energy consumption in modern data centers, we propose a new, distributed computing platform called Nano Data Centers (NaDa). NaDa uses ISP-controlled home gateways to provide computing and storage services and adopts a managed peer-to-peer model to form a distributed data center infrastructure. To evaluate the potential for energy savings in NaDa platform we pick Video-on-Demand (VoD) services. We develop an energy consumption model for VoD in traditional and in NaDa data centers and evaluate this model using a large set of empirical VoD access data. We find that even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs, and the reduction of network energy consumption as a result of demand and service co-localization in NaDa.",
"title": ""
}
] |
[
{
"docid": "58e6b3b63b2210da621aabd891dbc627",
"text": "The precise role of orbitofrontal cortex (OFC) in affective processing is still debated. One view suggests OFC represents stimulus reward value and supports learning and relearning of stimulus-reward associations. An alternate view implicates OFC in behavioral control after rewarding or punishing feedback. To discriminate between these possibilities, we used event-related functional magnetic resonance imaging in subjects performing a reversal task in which, on each trial, selection of the correct stimulus led to a 70% probability of receiving a monetary reward and a 30% probability of obtaining a monetary punishment. The incorrect stimulus had the reverse contingency. In one condition (choice), subjects had to choose which stimulus to select and switch their response to the other stimulus once contingencies had changed. In another condition (imperative), subjects had simply to track the currently rewarded stimulus. In some regions of OFC and medial prefrontal cortex, activity was related to valence of outcome, whereas in adjacent areas activity was associated with behavioral choice, signaling maintenance of the current response strategy on a subsequent trial. Caudolateral OFC-anterior insula was activated by punishing feedback preceding a switch in stimulus in both the choice and imperative conditions, indicating a possible role for this region in signaling a change in reward contingencies. These results suggest functional heterogeneity within the OFC, with a role for this region in representing stimulus-reward values, signaling changes in reinforcement contingencies and in behavioral control.",
"title": ""
},
{
"docid": "cf30e30d7683fd2b0dec2bd6cc354620",
"text": "As online courses such as MOOCs become increasingly popular, there has been a dramatic increase for the demand for methods to facilitate this type of organisation. While resources for new courses are often freely available, they are generally not suitably organised into easily manageable units. In this paper, we investigate how state-of-the-art topic segmentation models can be utilised to automatically transform unstructured text into coherent sections, which are suitable for MOOCs content browsing. The suitability of this method with regards to course organisation is confirmed through experiments with a lecture corpus, configured explicitly according to MOOCs settings. Experimental results demonstrate the reliability and scalability of this approach over various academic disciplines. The findings also show that the topic segmentation model which used discourse cues displayed the best results overall.",
"title": ""
},
{
"docid": "e273298153872073e463662b5d6d8931",
"text": "The lack of readily-available large corpora of aligned monolingual sentence pairs is a major obstacle to the development of Statistical Machine Translation-based paraphrase models. In this paper, we describe the use of annotated datasets and Support Vector Machines to induce larger monolingual paraphrase corpora from a comparable corpus of news clusters found on the World Wide Web. Features include: morphological variants; WordNet synonyms and hypernyms; loglikelihood-based word pairings dynamically obtained from baseline sentence alignments; and formal string features such as word-based edit distance. Use of this technique dramatically reduces the Alignment Error Rate of the extracted corpora over heuristic methods based on position of the sentences in the text.",
"title": ""
},
{
"docid": "1186bb5c96eebc26ce781d45fae7768d",
"text": "Essential genes are required for the viability of an organism. Accurate and rapid identification of new essential genes is of substantial theoretical interest to synthetic biology and has practical applications in biomedicine. Fractals provide facilitated access to genetic structure analysis on a different scale. In this study, machine learning-based methods using solely fractal features are presented and the problem of predicting essential genes in bacterial genomes is evaluated. Six fractal features were investigated to learn the parameters of five supervised classification methods for the binary classification task. The optimal parameters of these classifiers are determined via grid-based searching technique. All the currently available identified genes from the database of essential genes were utilized to build the classifiers. The fractal features were proven to be more robust and powerful in the prediction performance. In a statistical sense, the ELM method shows superiority in predicting the essential genes. Non-parameter tests of the average AUC and ACC showed that the fractal feature is much better than other five compared features sets. Our approach is promising and convenient to identify new bacterial essential genes.",
"title": ""
},
{
"docid": "c8f2aaa7c7aa874e4578005ad8b219c4",
"text": "The geometrical and electrical features of the Vivaldi antenna are studied in the light of the frequency-independent antenna theory. A scaling principle is derived for the exponential tapering of the antenna, and a closed-form model for the current distribution is provided. Such theoretical results are in good agreement with several numerical simulations performed by using the NEC2 code. Furthermore, a practical feeding system, based on a double-Y balun, is developed and tested to obtain a more systematic approach to the design of the aforesaid antennas",
"title": ""
},
{
"docid": "93f1ee5523f738ab861bcce86d4fc906",
"text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. To this date, most of the successful SRL systems were built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. The attempts of building an end-to-end SRL learning system without using parsing were less successful (Collobert et al., 2011). In this work, we propose to use deep bi-directional recurrent network as an end-to-end system for SRL. We take only original text information as input feature, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on CoNLL-2005 shared task and achieved F1 score of 81.07. This result outperforms the previous state-of-the-art system from the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on CoNLL2012 shared task. As a result of simplicity, our model is also computationally efficient that the parsing speed is 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models. And the latent variables of our model implicitly capture the syntactic structure of a sentence.",
"title": ""
},
{
"docid": "4cd8a9f4dbe713be59b540968b5114f7",
"text": "ConvNets and ImageNet have driven the recent success of deep learning for image classification. However, the marked slowdown in performance improvement combined with the lack of robustness of neural networks to adversarial examples and their tendency to exhibit undesirable biases question the reliability of these methods. This work investigates these questions from the perspective of the end-user by using human subject studies and explanations. The contribution of this study is threefold. We first experimentally demonstrate that the accuracy and robustness of ConvNets measured on ImageNet are vastly underestimated. Next, we show that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user. We finally introduce a novel tool for uncovering the undesirable biases learned by a model. These contributions also show that explanations are a valuable tool both for improving our understanding of ConvNets’ predictions and for designing more reliable models.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "4e91c356aedd067ea2cb2ed01b3fb137",
"text": "With the availability of very large, relatively inexpensive main memories, it is becoming possible keep large databases resident in main memory In this paper we consider the changes necessary to permit a relational database system to take advantage of large amounts of main memory We evaluate AVL vs B+-tree access methods for main memory databases, hash-based query processing strategies vs sort-merge, and study recovery issues when most or all of the database fits in main memory As expected, B+-trees are the preferred storage mechanism unless more than 80--90% of the database fits in main memory A somewhat surprising result is that hash based query processing strategies are advantageous for large memory situations",
"title": ""
},
{
"docid": "c12d595a944aa592fd3a1414fa873f93",
"text": "Central nervous system cytotoxicity is linked to neurodegenerative disorders. The objective of the study was to investigate whether monosodium glutamate (MSG) neurotoxicity can be reversed by natural products, such as ginger or propolis, in male rats. Four different groups of Wistar rats were utilized in the study. Group A served as a normal control, whereas group B was orally administered with MSG (100 mg/kg body weight, via oral gavage). Two additional groups, C and D, were given MSG as group B along with oral dose (500 mg/kg body weight) of either ginger or propolis (600 mg/kg body weight) once a day for two months. At the end, the rats were sacrificed, and the brain tissue was excised and levels of neurotransmitters, ß-amyloid, and DNA oxidative marker 8-OHdG were estimated in the brain homogenates. Further, formalin-fixed and paraffin-embedded brain sections were used for histopathological evaluation. The results showed that MSG increased lipid peroxidation, nitric oxide, neurotransmitters, and 8-OHdG as well as registered an accumulation of ß-amyloid peptides compared to normal control rats. Moreover, significant depletions of glutathione, superoxide dismutase, and catalase as well as histopathological alterations in the brain tissue of MSG-treated rats were noticed in comparison with the normal control. In contrast, treatment with ginger greatly attenuated the neurotoxic effects of MSG through suppression of 8-OHdG and β-amyloid accumulation as well as alteration of neurotransmitter levels. Further improvements were also noticed based on histological alterations and reduction of neurodegeneration in the brain tissue. A modest inhibition of the neurodegenerative markers was observed by propolis. The study clearly indicates a neuroprotective effect of ginger and propolis against MSG-induced neurodegenerative disorders and these beneficial effects could be attributed to the polyphenolic compounds present in these natural products.",
"title": ""
},
{
"docid": "998f9c2694de2affff63c06f20f8b9c1",
"text": "In this paper we investigate the image aesthetics classification problem, aka, automatically classifying an image into low or high aesthetic quality, which is quite a challenging problem beyond image recognition. Deep convolutional neural network (DCNN) methods have recently shown promising results for image aesthetics assessment. Currently, a powerful inception module is proposed which shows very high performance in object classification. However, the inception module has not been taken into consideration for the image aesthetics assessment problem. In this paper, we propose a novel DCNN structure codenamed ILGNet for image aesthetics classification, which introduces the Inception module and connects intermediate Local layers to the Global layer for the output. Besides, we use a pre-trained image classification CNN called GoogLeNet on the ImageNet dataset and fine tune our connected local and global layer on the large scale aesthetics assessment AVA dataset [1]. The experimental results show that the proposed ILGNet outperforms the state of the art results in image aesthetics assessment in the AVA benchmark.",
"title": ""
},
{
"docid": "8ca6e0b5c413cc228af0d64ce8cf9d3b",
"text": "On January 8, a Database Column reader asked for our views on new distributed database research efforts, and we'll begin here with our views on MapReduce. This is a good time to discuss it, since the recent trade press has been filled with news of the revolution of so-called \"cloud computing.\" This paradigm entails harnessing large numbers of (low-end) processors working in parallel to solve a computing problem. In effect, this suggests constructing a data center by lining up a large number of \"jelly beans\" rather than utilizing a much smaller number of high-end servers.",
"title": ""
},
{
"docid": "e8246712bb8c4e793697b9933ab8b4f6",
"text": "In this paper we utilize a dimensional emotion representation named Resonance-Arousal-Valence to express music emotion and inverse exponential function to represent emotion decay process. The relationship between acoustic features and their emotional impact reflection based on this representation has been well constructed. As music well expresses feelings, through the users' historical playlist in a session, we utilize the Conditional Random Fields to compute the probabilities of different emotion states, choosing the largest as the predicted user's emotion state. In order to recommend music based on the predicted user's emotion, we choose the optimized ranked music list that has the highest emotional similarities to the music invoking the predicted emotion state in the playlist for recommendation. We utilize our minimization iteration algorithm to assemble the optimized ranked recommended music list. The experiment results show that the proposed emotion-based music recommendation paradigm is effective to track the user's emotions and recommend music fitting his emotional state.",
"title": ""
},
{
"docid": "c4cfd9364c271e0af23a03c28f5c95ad",
"text": "Due to the different posture and view angle, the image will appear some objects that do not exist in another image of the same person captured by another camera. The region covered by new items adversely improved the difficulty of person re-identification. Therefore, we named these regions as Damaged Region (DR). To overcome the influence of DR, we propose a new way to extract feature based on the local region that divides both in the horizontal and vertical directions. Before splitting the image, we enlarge it with direction to increase the useful information, potentially reducing the impact of different viewing angles. Then each divided region is a separated part, and the results of the adjacent regions will be compared. As a result the region that gets a higher score is selected as the valid one, and which gets the lower score caused by pose variation and items occlusion will be invalid. Extensive experiments carried out on three person re-identification benchmarks, including VIPeR, PRID2011, CUHK01, clearly show the significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "d789793e01a87e27ece43384ca0dd972",
"text": "Plastics have transformed everyday life; usage is increasing and annual production is likely to exceed 300 million tonnes by 2010. In this concluding paper to the Theme Issue on Plastics, the Environment and Human Health, we synthesize current understanding of the benefits and concerns surrounding the use of plastics and look to future priorities, challenges and opportunities. It is evident that plastics bring many societal benefits and offer future technological and medical advances. However, concerns about usage and disposal are diverse and include accumulation of waste in landfills and in natural habitats, physical problems for wildlife resulting from ingestion or entanglement in plastic, the leaching of chemicals from plastic products and the potential for plastics to transfer chemicals to wildlife and humans. However, perhaps the most important overriding concern, which is implicit throughout this volume, is that our current usage is not sustainable. Around 4 per cent of world oil production is used as a feedstock to make plastics and a similar amount is used as energy in the process. Yet over a third of current production is used to make items of packaging, which are then rapidly discarded. Given our declining reserves of fossil fuels, and finite capacity for disposal of waste to landfill, this linear use of hydrocarbons, via packaging and other short-lived applications of plastic, is simply not sustainable. There are solutions, including material reduction, design for end-of-life recyclability, increased recycling capacity, development of bio-based feedstocks, strategies to reduce littering, the application of green chemistry life-cycle analyses and revised risk assessment approaches. Such measures will be most effective through the combined actions of the public, industry, scientists and policymakers. There is some urgency, as the quantity of plastics produced in the first 10 years of the current century is likely to approach the quantity produced in the entire century that preceded.",
"title": ""
},
{
"docid": "d485607db19e3defa000b24a59b1074a",
"text": "In the past years we have witnessed an explosive growth of the data and information on the World Wide Web, which makes it difficult for normal users to find the information that they are interested in. On the other hand, the majority of the data and resources are very unpopular, which can be considered as “hidden information”, and are very difficult to find. By building a bridge between the users and the objects and constructing their similarities, the Personal Recommender System (PRS) can recommend the objects that the users are potentially interested in. PRS plays an important role in not only social and economic life but also scientific analysis. The interdisciplinary PRS attracts attention from the communities of information science, computational mathematics, statistical physics, management science, and consumer behaviors, etc. In fact, PRS is one of the most efficient tools to solve the information overload problem. According to the recommendation algorithms, we introduce four typical systems, including the collaborating filtering system, the content-based system, the structure-based system, and the hybrid system. In addition, some improved algorithms are proposed to overcome the limitations of traditional systems. This review article may shed some light on the study of PRS from different backgrounds.",
"title": ""
},
{
"docid": "c75836bf10114bd568745dfaba611be0",
"text": "The present paper continues our investigations in the field of Supercapacitors or Electrochemical Double Layer Capacitors, briefly named EDLCs. The series connection of EDLCs is usual in order to obtain higher voltage levels. The inherent uneven state of charge (SOC) and manufacturing dispersions determine during charging at constant current that one of the capacitors reaches first the rated voltage levels and could, by further charging, be damaged. The balancing circuit with resistors and transistors used to bypass the charging current can be improved using the proposed circuit. We present here a complex variant, based on integrated circuit acting similar to a microcontroller. The circuit is adapted from the circuits investigated in the last 7–8 years for the batteries, especially for Lithium-ion type. The test board built around the circuit is performant, energy efficient and can be further improved to ensure the balancing control for larger capacitances.",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
},
{
"docid": "f981f9a15062f4187dfa7ac71f19d54a",
"text": "Background\nSoccer is one of the most widely played sports in the world. However, soccer players have an increased risk of lower limb injury. These injuries may be caused by both modifiable and non-modifiable factors, justifying the adoption of an injury prevention program such as the Fédération Internationale de Football Association (FIFA) 11+. The purpose of this study was to evaluate the efficacy of the FIFA 11+ injury prevention program for soccer players.\n\n\nMethodology\nThis meta-analysis was based on the PRISMA 2015 protocol. A search using the keywords \"FIFA,\" \"injury prevention,\" and \"football\" found 183 articles in the PubMed, MEDLINE, LILACS, SciELO, and ScienceDirect databases. Of these, 6 studies were selected, all of which were randomized clinical trials.\n\n\nResults\nThe sample consisted of 6,344 players, comprising 3,307 (52%) in the intervention group and 3,037 (48%) in the control group. The FIFA 11+ program reduced injuries in soccer players by 30%, with an estimated relative risk of 0.70 (95% confidence interval, 0.52-0.93, p = 0.01). In the intervention group, 779 (24%) players had injuries, while in the control group, 1,219 (40%) players had injuries. However, this pattern was not homogeneous throughout the studies because of clinical and methodological differences in the samples. This study showed no publication bias.\n\n\nConclusion\nThe FIFA 11+ warm-up program reduced the risk of injury in soccer players by 30%.",
"title": ""
},
{
"docid": "ef142067a29f8662e36d68ee37c07bce",
"text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).",
"title": ""
}
] |
scidocsrr
|
f4a0c4e0bffa5e0e47db1a7f268dc27e
|
BLEWS: Using Blogs to Provide Context for News Articles
|
[
{
"docid": "77125ee1f92591489ee5d933710cc1f1",
"text": "Subjectivity in natural language refers to aspects of language used to express opinions, evaluations, and speculations. There are numerous natural language processing applications for which subjectivity analysis is relevant, including information extraction and text categorization. The goal of this work is learning subjective language from corpora. Clues of subjectivity are generated and tested, including low-frequency words, collocations, and adjectives and verbs identified using distributional similarity. The features are also examined working together in concert. The features, generated from different data sets using different procedures, exhibit consistency in performance in that they all do better and worse on the same data sets. In addition, this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features. Finally, the clues are used to perform opinion piece recognition (a type of text categorization and genre detection) to demonstrate the utility of the knowledge acquired in this article.",
"title": ""
}
] |
[
{
"docid": "71b09fba5c4054af268da7c0037253e6",
"text": "Recurrent neural networks are now the state-of-the-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.",
"title": ""
},
{
"docid": "c55c339eb53de3a385df7d831cb4f24b",
"text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.",
"title": ""
},
{
"docid": "653fee86af651e13e0d26fed35ef83e4",
"text": "Small ducted fan autonomous vehicles have potential for several applications, especially for missions in urban environments. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller for the GTSpy, a small ducted fan autonomous vehicle based on the Micro Autonomous Systems’ Helispy. This approach allows utilization of the entire low speed flight envelope with a relatively poorly understood vehicle. A simulator model is constructed from a force and moment analysis of the vehicle, allowing for a validation of the controller in preparation for flight testing. Data from flight testing of the system is provided.",
"title": ""
},
{
"docid": "eea57066c7cd0b778188c2407c8365f3",
"text": "For over two decades, video streaming over the Internet has received a substantial amount of attention from both academia and industry. Starting from the design of transport protocols for streaming video, research interests have later shifted to the peer-to-peer paradigm of designing streaming protocols at the application layer. More recent research has focused on building more practical and scalable systems, using Dynamic Adaptive Streaming over HTTP. In this article, we provide a retrospective view of the research results over the past two decades, with a focus on peer-to-peer streaming protocols and the effects of cloud computing and social media.",
"title": ""
},
{
"docid": "d72e4df2e396a11ae7130ca7e0b2fb56",
"text": "Advances in location-acquisition and wireless communication technologies have led to wider availability of spatio-temporal (ST) data, which has unique spatial properties (i.e. geographical hierarchy and distance) and temporal properties (i.e. closeness, period and trend). In this paper, we propose a <u>Deep</u>-learning-based prediction model for <u>S</u>patio-<u>T</u>emporal data (DeepST). We leverage ST domain knowledge to design the architecture of DeepST, which is comprised of two components: spatio-temporal and global. The spatio-temporal component employs the framework of convolutional neural networks to simultaneously model spatial near and distant dependencies, and temporal closeness, period and trend. The global component is used to capture global factors, such as day of the week, weekday or weekend. Using DeepST, we build a real-time crowd flow forecasting system called UrbanFlow1. Experiment results on diverse ST datasets verify DeepST's ability to capture ST data's spatio-temporal properties, showing the advantages of DeepST beyond four baseline methods.",
"title": ""
},
{
"docid": "92a0fb602276952962762b07e7cd4d2b",
"text": "Representation of video is a vital problem in action recognition. This paper proposes Stacked Fisher Vectors (SFV), a new representation with multi-layer nested Fisher vector encoding, for action recognition. In the first layer, we densely sample large subvolumes from input videos, extract local features, and encode them using Fisher vectors (FVs). The second layer compresses the FVs of subvolumes obtained in previous layer, and then encodes them again with Fisher vectors. Compared with standard FV, SFV allows refining the representation and abstracting semantic information in a hierarchical way. Compared with recent mid-level based action representations, SFV need not to mine discriminative action parts but can preserve mid-level information through Fisher vector encoding in higher layer. We evaluate the proposed methods on three challenging datasets, namely Youtube, J-HMDB, and HMDB51. Experimental results demonstrate the effectiveness of SFV, and the combination of the traditional FV and SFV outperforms stateof-the-art methods on these datasets with a large margin.",
"title": ""
},
{
"docid": "635da218aa9a1b528fbc378844b393fd",
"text": "A variety of nonlinear, including semidefinite, relaxations have been developed in recent years for nonconvex optimization problems. Their potential can be realized only if they can be solved with sufficient speed and reliability. Unfortunately, state-of-the-art nonlinear programming codes are significantly slower and numerically unstable compared to linear programming software. In this paper, we facilitate the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach. Our algorithm exploits convexity, either identified automatically or supplied through a suitable modeling language construct, in order to generate polyhedral cutting planes and relaxations for multivariate nonconvex problems. We prove that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, our relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function. The convexity of functional expressions that are composed to form nonconvex expressions is also automatically exploited. Root-node relaxations are computed for 87 problems from globallib and minlplib, and detailed computational results are presented for globally solving 26 of these problems with BARON 7.2, which implements the proposed techniques. The use of cutting planes for these problems reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude.",
"title": ""
},
{
"docid": "961c4da65983926a8bc06189f873b006",
"text": "By studying two well known hypotheses in economics, this paper illustrates how emergent properties can be shown in an agent-based artificial stock market. The two hypotheses considered are the efficient market hypothesis and the rational expectations hypothesis. We inquire whether the macrobehavior depicted by these two hypotheses is consistent with our understanding of the microbehavior. In this agent-based model, genetic programming is applied to evolving a population of traders learning over time. We first apply a series of econometric tests to show that the EMH and the REH can be satisfied with some portions of the artificial time series. Then, by analyzing traders’ behavior, we show that these aggregate results cannot be interpreted as a simple scaling-up of individual behavior. A conjecture based on sunspot-like signals is proposed to explain why macrobehavior can be very different from microbehavior. We assert that the huge search space attributable to genetic programming can induce sunspot-like signals, and we use simulated evolved complexity of forecasting rules and Granger causality tests to examine this assertion. © 2002 Elsevier Science B.V. All rights reserved. JEL classification: G12: asset pricing; G14: information and market efficiency; D83: search, learning, and information",
"title": ""
},
{
"docid": "7d57caa810120e1590ad277fb8113222",
"text": "Cancer is increasing the total number of unexpected deaths around the world. Until now, cancer research could not significantly contribute to a proper solution for the cancer patient, and as a result, the high death rate is uncontrolled. The present research aim is to extract the significant prevention factors for particular types of cancer. To find out the prevention factors, we first constructed a prevention factor data set with an extensive literature review on bladder, breast, cervical, lung, prostate and skin cancer. We subsequently employed three association rule mining algorithms, Apriori, Predictive apriori and Tertius algorithms in order to discover most of the significant prevention factors against these specific types of cancer. Experimental results illustrate that Apriori is the most useful association rule-mining algorithm to be used in the discovery of prevention factors.",
"title": ""
},
{
"docid": "61ad7938355b899b2934bed1d5777e95",
"text": "Erythema annulare centrifugum (EAC) is a disease of unknown etiology, although it has been variously associated with hypersensitivity reactions, infections, hormonal disorders, rheumatological and liver diseases, dysproteinemias, drugs, and occult tumors. García-Muret et al described a subtype of EAC with annual relapses that occurred in the summer.1 Nonetheless, in our hospital we have observed how 2 patients with longstanding EAC presented a clear clinical improvement in response to natural exposure to sunlight during the summer months. The first patient was a woman aged 22 years, with no known diseases, who presented an 8-year history of lesions on the trunk and upper and lower limbs. The asymptomatic and sometimes slightly pruritic lesions underwent episodes of centrifugal spread in winter. Examination revealed several erythematous plaques varying in size between 2 cm and 8 cm. The larger lesions were annular, with erythematous borders that were slightly more raised and trailing scale (Figure 1). A culture of scales from the lesions was negative on 2 occasions. The Spanish Contact Dermatitis and Skin Allergy Research Group (GEIDAC) standard battery of patch tests were all negative. Complete blood count, basic blood chemistry, antibody test, chest x-ray, and abdominal ultrasound results were normal. Superficial perivascular dermatitis was reported on the 2 occasions biopsies were performed. The patient had failed to respond to treatment with topical corticosteroids and antifungal agents. To prevent possible postinflammatory hyperpigmentation the patient had avoided sunbathing during the summer months. However, her last revision revealed that the lesions had completely disappeared following continuous sun exposure during her holidays (Figure 2). The second patient was a man aged 27 years. Since the age of 16 years he had presented with occasional flare-ups, on the trunk and limbs, of erythematous lesions with centrifugal spread and a scaly border. Routine blood tests and antinuclear antibodies were normal or negative, cultures were negative, and a histopathology study merely showed nonspecific chronic dermatitis. Flare-ups were not associated with any triggering factor, and the lesions had not responded to treatment with antifungal agents or topical corticosteroids. Nonetheless, the patient’s condition had improved during the summer, coinciding with exposure to sunlight. EAC, which was originally described by Darier in 1916, presents as annular plaques with clear central areas and slightly raised erythematous borders with trailing scale. Centrifugal growth gives rise to polycyclic patterns in the plaques. The disease follows a chronic course marked by exacerbations and remissions. The most frequent lesion",
"title": ""
},
{
"docid": "0a65c096f91206c868f05bea9acc28fd",
"text": "This paper presents a review on recent developments in BLDC motor controllers and studies on four quadrant operation of BLDC drive along with active PFC. The main areas reviewed include Sensor-less control, Direct Torque Control (DTC), Fuzzy logic control, controller for four quadrant operation and active Power Factor Corrected (PFC) converter fed BLDC motor drive. A comprehensive study has been done on four quadrant operation and active PFC converter fed BLDC motor drive with simulation in MATLAB/SIMULINK. The proposed control algorithm for four quadrant operation detects the speed reversal requirement and changes the quadrant of operation accordingly. In PFC converter fed BLDC motor drive, a Boost converter working in continuous current mode is designed to improve the supply power factor.",
"title": ""
},
{
"docid": "1a6e9229f6bc8f6dc0b9a027e1d26607",
"text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.",
"title": ""
},
{
"docid": "3f07c471245b2e8cc369bc591a035201",
"text": "Test automation is a widely-used approach to reduce the cost of manual software testing. However, if it is not planned or conducted properly, automated testing would not necessarily be more cost effective than manual testing. Deciding what parts of a given System Under Test (SUT) should be tested in an automated fashion and what parts should remain manual is a frequently-asked and challenging question for practitioner testers. In this study, we propose a search-based approach for deciding what parts of a given SUT should be tested automatically to gain the highest Return On Investment (ROI). This work is the first systematic approach for this problem, and significance of our approach is that it considers automation in the entire testing process (i.e., from test-case design, to test scripting, to test execution, and test-result evaluation). The proposed approach has been applied in an industrial setting in the context of a software product used in the oil and gas industry in Canada. Among the results of the case study is that, when planned and conducted properly using our decision-support approach, test automation provides the highest ROI. In this study, we show that if automation decision is taken effectively, test-case design, test execution, and test evaluation can result in about 307%, 675%, and 41% ROI in 10 rounds of using automated test suites.",
"title": ""
},
{
"docid": "0d11c7f94973be05d906f94238d706e4",
"text": "Head-Mounted Displays (HMDs) combined with 3-or-more Degree-of-Freedom (DoF) input enable rapid manipulation of stereoscopic 3D content. However, such input is typically performed with hands in midair and therefore lacks precision and stability. Also, recent consumer-grade HMDs suffer from limited angular resolution and/or limited field-of-view as compared to a desktop monitor. We present the DualCAD system that implements two solutions to these problems. First, the user may freely switch at runtime between an augmented reality HMD mode, and a traditional desktop mode with precise 2D mouse input and an external desktop monitor. Second, while in the augmented reality HMD mode, the user holds a smartphone in their non-dominant hand that is tracked with 6 DoF, allowing it to be used as a complementary high-resolution display as well as an alternative input device for stylus or multitouch input. Two novel bimanual interaction techniques that leverage the properties of the smartphone are presented. We also report initial user feedback.",
"title": ""
},
{
"docid": "d8c5ff196db9acbea12e923b2dcef276",
"text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.",
"title": ""
},
{
"docid": "405022c5a2ca49973eaaeb1e1ca33c0f",
"text": "BACKGROUND\nPreanalytical factors are the main source of variation in clinical chemistry testing and among the major determinants of preanalytical variability, sample hemolysis can exert a strong influence on result reliability. Hemolytic samples are a rather common and unfavorable occurrence in laboratory practice, as they are often considered unsuitable for routine testing due to biological and analytical interference. However, definitive indications on the analytical and clinical management of hemolyzed specimens are currently lacking. Therefore, the present investigation evaluated the influence of in vitro blood cell lysis on routine clinical chemistry testing.\n\n\nMETHODS\nNine aliquots, prepared by serial dilutions of homologous hemolyzed samples collected from 12 different subjects and containing a final concentration of serum hemoglobin ranging from 0 to 20.6 g/L, were tested for the most common clinical chemistry analytes. Lysis was achieved by subjecting whole blood to an overnight freeze-thaw cycle.\n\n\nRESULTS\nHemolysis interference appeared to be approximately linearly dependent on the final concentration of blood-cell lysate in the specimen. This generated a consistent trend towards overestimation of alanine aminotransferase (ALT), aspartate aminotransferase (AST), creatinine, creatine kinase (CK), iron, lactate dehydrogenase (LDH), lipase, magnesium, phosphorus, potassium and urea, whereas mean values of albumin, alkaline phosphatase (ALP), chloride, gamma-glutamyltransferase (GGT), glucose and sodium were substantially decreased. Clinically meaningful variations of AST, chloride, LDH, potassium and sodium were observed in specimens displaying mild or almost undetectable hemolysis by visual inspection (serum hemoglobin < 0.6 g/L). The rather heterogeneous and unpredictable response to hemolysis observed for several parameters prevented the adoption of reliable statistic corrective measures for results on the basis of the degree of hemolysis.\n\n\nCONCLUSION\nIf hemolysis and blood cell lysis result from an in vitro cause, we suggest that the most convenient corrective solution might be quantification of free hemoglobin, alerting the clinicians and sample recollection.",
"title": ""
},
{
"docid": "10a2fefd81b61e3184d3fdc018ff42ab",
"text": "Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named as Mask TextSpotter, is inspired by the newly published work Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.",
"title": ""
},
{
"docid": "9ffb4220530a4758ea6272edf6e7e531",
"text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.",
"title": ""
},
{
"docid": "b7bf40c61ff4c73a8bbd5096902ae534",
"text": "—In therapeutic and functional applications transcutaneous electrical stimulation (TES) is still the most frequently applied technique for muscle and nerve activation despite the huge efforts made to improve implantable technologies. Stimulation electrodes play the important role in interfacing the tissue with the stimulation unit. Between the electrode and the excitable tissue there are a number of obstacles in form of tissue resistivities and permittivities that can only be circumvented by magnetic fields but not by electric fields and currents. However, the generation of magnetic fields needed for the activation of excitable tissues in the human body requires large and bulky equipment. TES devices on the other hand can be built cheap, small and light weight. The weak part in TES is the electrode that cannot be brought close enough to the excitable tissue and has to fulfill a number of requirements to be able to act as efficient as possible. The present review article summarizes the most important factors that influence efficient TES, presents and discusses currently used electrode materials, designs and configurations, and points out findings that have been obtained through modeling, simulation and testing.",
"title": ""
},
{
"docid": "43f1cc712b3803ef7ac8273136dbe75d",
"text": "Improved understanding of the anatomy and physiology of the aging face has laid the foundation for adopting an earlier and more comprehensive approach to facial rejuvenation, shifting the focus from individual wrinkle treatment and lift procedures to a holistic paradigm that considers the entire face and its structural framework. This article presents an overview of a comprehensive method to address facial aging. The key components to the reported strategy for improving facial cosmesis include, in addition to augmentation of volume loss, protection with sunscreens and antioxidants; promotion of epidermal cell turnover with techniques such as superficial chemical peels; microlaser peels and microdermabrasion; collagen stimulation and remodeling via light, ultrasound, or radiofrequency (RF)-based methods; and muscle control with botulinum toxin. For the treatment of wrinkles and for the augmentation of pan-facial dermal lipoatrophy, several types of fillers and volumizers including hyaluronic acid (HA), autologous fat, and calcium hydroxylapatite (CaHA) or injectable poly-l-lactic acid (PLLA) are available. A novel bimodal, trivector technique to restore structural facial volume loss that combines supraperiosteal depot injections of volume-depleted fat pads and dermal/subcutaneous injections for panfacial lipoatrophy with PLLA is presented. The combination of treatments with fillers; toxins; light-, sound-, and RF-based technologies; and surgical procedures may help to forestall the facial aging process and provide more natural results than are possible with any of these techniques alone. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
}
] |
scidocsrr
|
0e0752aac332e84c8b3da94ec64fec5f
|
Young Citizens and Civic Learning : Two Paradigms of Citizenship in the Digital Age
|
[
{
"docid": "9d6a0b31bf2b64f1ec624222a2222e2a",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati",
"title": ""
}
] |
[
{
"docid": "907888b819c7f65fe34fb8eea6df9c93",
"text": "Most time-series datasets with multiple data streams have (many) missing measurements that need to be estimated. Most existing methods address this estimation problem either by interpolating within data streams or imputing across data streams; we develop a novel approach that does both. Our approach is based on a deep learning architecture that we call a Multidirectional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. To demonstrate the power of our approach we apply it to a familiar real-world medical dataset and demonstrate significantly improved performance.",
"title": ""
},
{
"docid": "80c21770ada160225e17cb9673fff3b3",
"text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.",
"title": ""
},
{
"docid": "c7db01ee84fc2d5f6e7861bfc1705027",
"text": "Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by the lack of expressive power. This issue is primarily due to the fact that conventional MRFs formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRF in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over state-of-the-arts.",
"title": ""
},
{
"docid": "5b617701a4f2fa324ca7e3e7922ce1c4",
"text": "Open circuit voltage of a silicon solar cell is around 0.6V. A solar module is constructed by connecting a number of cells in series to get a practically usable voltage. Partial shading of a Solar Photovoltaic Module (SPM) is one of the main causes of overheating of shaded cells and reduced energy yield of the module. The present work is a study of harmful effects of partial shading on the performance of a PV module. A PSPICE simulation model that represents 36 cells PV module under partial shaded conditions has been used to test several shading profiles and results are presented.",
"title": ""
},
{
"docid": "24ea64d86683370bd39c084f3ac94f94",
"text": "Natural Language Understanding (NLU) systems need to encode human generated text (or speech) and reason over it at a deep semantic level. Any NLU system typically involves two main components: The first is an encoder, which composes words (or other basic linguistic units) within the input utterances to compute encoded representations, that are then used as features in the second component, a predictor, to reason over the encoded inputs and produce the desired output. We argue that performing these two steps over the utterances alone is seldom sufficient for understanding language, as the utterances themselves do not contain all the information needed for understanding them. We identify two kinds of additional knowledge needed to fill the gaps: background knowledge and contextual knowledge. The goal of this thesis is to build end-to-end NLU systems that encode inputs along with relevant background knowledge, and reason about them in the presence of contextual knowledge. The first part of the thesis deals with background knowledge. While distributional methods for encoding inputs have been used to represent meaning of words in the context of other words in the input, there are other aspects of semantics that are out of their reach. These are related to commonsense or real world information which is part of shared human knowledge but is not explicitly present in the input. We address this limitation by having the encoders also encode background knowledge, and present two approaches for doing so. The first is by modeling the selectional restrictions verbs place on their semantic role fillers. We use this model to encode events, and show that these event representations are useful in detecting newswire anomalies. Our second approach towards augmenting distributional methods is to use external knowledge bases like WordNet. We compute ontologygrounded token-level representations of words and show that they are useful in predicting prepositional phrase attachments and textual entailment. The second part of the thesis focuses on contextual knowledge. Machine comprehension tasks require interpreting input utterances in the context of other structured or unstructured information. This can be challenging for multiple reasons. Firstly, given some task-specific data, retrieving the relevant contextual knowledge from it can be a serious problem. Secondly, even when the relevant contextual knowledge is provided, reasoning over it might require executing a complex series of operations depending on the structure of the context and the compositionality of the input language. To handle reasoning over contexts, we first describe a type constrained neural semantic parsing framework for question answering (QA). We achieve state of the art performance on WIKITABLEQUESTIONS, a dataset with highly compositional questions over semi-structured tables. Proposed work in this area includes application of this framework to QA in other domains with weaker supervision. To address the challenge of retrieval, we propose to build neural network models with explicit memory components that can adaptively reason and learn to retrieve relevant context given a question.",
"title": ""
},
{
"docid": "d56855e068a4524fda44d93ac9763cab",
"text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.",
"title": ""
},
{
"docid": "be9fc2798c145abe70e652b7967c3760",
"text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.",
"title": ""
},
{
"docid": "a249375471d58592f1911f2a285aa945",
"text": "The existing state-of-the-art in the field of intrusion detection systems (IDSs) generally involves some use of machine learning algorithms. However, the computer security community is growing increasingly aware that a sophisticated adversary could target the learning module of these IDSs in order to circumvent future detections. Consequently, going forward, robustness of machine-learning based IDSs against adversarial manipulation (i.e., poisoning) will be the key factor for the overall success of these systems in the real world. In our work, we focus on adaptive IDSs that use anomaly-based detection to identify malicious activities in an information system. To be able to evaluate the susceptibility of these IDSs to deliberate adversarial poisoning, we have developed a novel framework for their performance testing under adversarial contamination. We have also studied the viability of using deep autoencoders in the detection of anomalies in adaptive IDSs, as well as their overall robustness against adversarial poisoning. Our experimental results show that our proposed autoencoder-based IDS outperforms a generic PCA-based counterpart by more than 15% in terms of detection accuracy. The obtained results concerning the detection ability of the deep autoencoder IDS under adversarial contamination, compared to that of the PCA-based IDS, are also encouraging, with the deep autoencoder IDS maintaining a more stable detection in parallel to limiting the contamination of its training dataset to just bellow 2%.",
"title": ""
},
{
"docid": "9c61e4971829a799b6e979f1b6d69387",
"text": "This work examines humanoid social robots in Japan and the North America with a view to comparing and contrasting the projects cross culturally. In North America, I look at the work of Cynthia Breazeal at the Massachusetts Institute of Technology and her sociable robot project: Kismet. In Japan, at the Osaka University, I consider the project of Hiroshi Ishiguro: Repliée-Q2. I first distinguish between utilitarian and affective social robots. Then, drawing on published works of Breazeal and Ishiguro I examine the proposed vision of each project. Next, I examine specific characteristics (embodied and social intelligence, morphology and aesthetics, and moral equivalence) of Kismet and Repliée with a view to comparing the underlying concepts associated with each. These features are in turn connected to the societal preconditions of robots generally. Specifically, the role that history of robots, theology/spirituality, and popular culture plays in the reception and attitude toward robots is considered.",
"title": ""
},
{
"docid": "34dde046a67d74938e0729802a215361",
"text": "can be found at: Small Group Research Additional services and information for http://sgr.sagepub.com/cgi/alerts Email Alerts: http://sgr.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://sgr.sagepub.com/cgi/content/refs/33/4/439 SAGE Journals Online and HighWire Press platforms): (this article cites 25 articles hosted on the Citations",
"title": ""
},
{
"docid": "5034b76d2a50d3955ccb9255fa054af9",
"text": "This paper proposed a highly sensitive micro-force sensor using curved PVDF for fetal heart rate monitoring long-termly. Based on the finite element method, numerical simulations were conducted to compare the straight and curved PVDF films in the aspects of the sensitivity for the dynamic excitation. The results showed that the peak voltages of the sensors varied remarkably with the curvature of the PVDF film. The maximum magnitude of the peak voltage response occurred at a certain value of the curvature. In the experiments, the voltage curves of the sensors were also recorded by an oscilloscope to study the effects of the mass on the surface of the sensors and validate the linearity and sensitivity of the sensors. The results showed that the sensitivity of the sensors up to about 60mV/N, which met the needs of fetal heart rate monitoring.",
"title": ""
},
{
"docid": "6f3bfd9b592654ca451eb5850e5684bc",
"text": "Mammals and birds have evolved three primary, discrete, interrelated emotion-motivation systems in the brain for mating, reproduction, and parenting: lust, attraction, and male-female attachment. Each emotion-motivation system is associated with a specific constellation of neural correlates and a distinct behavioral repertoire. Lust evolved to initiate the mating process with any appropriate partner; attraction evolved to enable individuals to choose among and prefer specific mating partners, thereby conserving their mating time and energy; male-female attachment evolved to enable individuals to cooperate with a reproductive mate until species-specific parental duties have been completed. The evolution of these three emotion-motivation systems contribute to contemporary patterns of marriage, adultery, divorce, remarriage, stalking, homicide and other crimes of passion, and clinical depression due to romantic rejection. This article defines these three emotion-motivation systems. Then it discusses an ongoing project using functional magnetic resonance imaging of the brain to investigate the neural circuits associated with one of these emotion-motivation systems, romantic attraction.",
"title": ""
},
{
"docid": "ad1ca3881dca3ee3fa218be90503815b",
"text": "Motivated by recent successes in neural machine translation and image caption generation, we present an end-to-end system to recognize Online Handwritten Mathematical Expressions (OHMEs). Our system has three parts: a convolution neural network for feature extraction, a bidirectional LSTM for encoding extracted features, and an LSTM and an attention model for generating target LaTex. For recognizing complex structures, our system needs large data for training. We propose local and global distortion models for generating OHMEs from the CROHME database. We evaluate the end-to-end system on the CROHME database and the generated databases. The experiential results show that the end-to-end system achieves 28.09% and 35.19% recognition rates on CROHME without and with the generated data, respectively.",
"title": ""
},
{
"docid": "d9599c4140819670a661bd4955680bb7",
"text": "The paper assesses the demand for rural electricity services and contrasts it with the technology options available for rural electrification. Decentralised Distributed Generation can be economically viable as reflected by case studies reported in literature and analysed in our field study. Project success is driven by economically viable technology choice; however it is largely contingent on organisational leadership and appropriate institutional structures. While individual leadership can compensate for deployment barriers, we argue that a large scale roll out of rural electrification requires an alignment of economic incentives and institutional structures to implement, operate and maintain the scheme. This is demonstrated with the help of seven case studies of projects across north India. 1 Introduction We explore the contribution that decentralised and renewable energy technologies can make to rural electricity supply in India. We take a case study approach, looking at seven sites across northern India where renewable energy technologies have been established to provide electrification for rural communities. We supplement our case studies with stakeholder interviews and household surveys, estimating levels of demand for electricity services from willingness and ability to pay. We also assess the overall viability of Distributed Decentralised Generation (DDG) projects by investigating the costs of implementation as well as institutional and organisational barriers to their operation and replication. Renewable energy technologies represent some of the most promising options available for distributed and decentralised electrification. Demand for reliable electricity services is significant. It represents a key driver behind economic development and raising basic standards of living. This is especially applicable to rural India home to 70% of the nation's population and over 25% of the world's poor. Access to reliable and affordable electricity can help support income-generating activity and allow utilisation of modern appliances and agricultural equipment whilst replacing inefficient and polluting kerosene lighting. Presently only around 55% of households are electrified (MOSPI 2006) leaving over 20 million households without power. The supply of electricity across India currently lacks both quality and quantity with an extensive shortfall in supply, a poor record for outages, high levels of transmission and distribution (T&D) losses and an overall need for extended and improved infrastructure (GoI 2006). The Indian Government recently outlined an ambitious plan for 100% village level electrification by the end of 2007 and total household electrification by 2012. To achieve this, a major programme of grid extension and strengthening of the rural electricity infrastructure has been initiated under …",
"title": ""
},
{
"docid": "35573aa0ad09b67298659d72f3d38329",
"text": "Prior distributions play a crucial role in Bayesian approaches to clustering. Two commonly-used prior distributions are the Dirichlet and Pitman-Yor processes. In this paper, we investigate the predictive probabilities that underlie these processes, and the implicit “rich-get-richer” characteristic of the resulting partitions. We explore an alternative prior for nonparametric Bayesian clustering—the uniform process—for applications where the “rich-get-richer” property is undesirable. We also explore the cost of this process: partitions are no longer exchangeable with respect to the ordering of variables. We present new asymptotic and simulation-based results for the clustering characteristics of the uniform process and compare these with known results for the Dirichlet and Pitman-Yor processes. We compare performance on a real document clustering task, demonstrating the practical advantage of the uniform process despite its lack of exchangeability over orderings.",
"title": ""
},
{
"docid": "d6bbec8d1426cacba7f8388231f04add",
"text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give load switching frequency as twice as the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is as twice as the switching frequency of IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, increased efficiency of the proposed inverter is verified by comparison with conventional designs.",
"title": ""
},
{
"docid": "6f5afc38b09fa4fd1e47d323cfe850c9",
"text": "In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advance insider, the trusted individual who knows your internal organization. These individuals are not after your systems, they are after your information. This presentation discusses how honeypot technologies can be used to detect, identify, and gather information on these specific threats.",
"title": ""
},
{
"docid": "3415fb5e9b994d6015a17327fc0fe4f4",
"text": "A human stress monitoring patch integrates three sensors of skin temperature, skin conductance, and pulsewave in the size of stamp (25 mm × 15 mm × 72 μm) in order to enhance wearing comfort with small skin contact area and high flexibility. The skin contact area is minimized through the invention of an integrated multi-layer structure and the associated microfabrication process; thus being reduced to 1/125 of that of the conventional single-layer multiple sensors. The patch flexibility is increased mainly by the development of flexible pulsewave sensor, made of a flexible piezoelectric membrane supported by a perforated polyimide membrane. In the human physiological range, the fabricated stress patch measures skin temperature with the sensitivity of 0.31 Ω/°C, skin conductance with the sensitivity of 0.28 μV/0.02 μS, and pulse wave with the response time of 70 msec. The skin-attachable stress patch, capable to detect multimodal bio-signals, shows potential for application to wearable emotion monitoring.",
"title": ""
},
{
"docid": "94af221c857462b51e14f527010fccde",
"text": "The immunology of the hygiene hypothesis of allergy is complex and involves the loss of cellular and humoral immunoregulatory pathways as a result of the adoption of a Western lifestyle and the disappearance of chronic infectious diseases. The influence of diet and reduced microbiome diversity now forms the foundation of scientific thinking on how the allergy epidemic occurred, although clear mechanistic insights into the process in humans are still lacking. Here we propose that barrier epithelial cells are heavily influenced by environmental factors and by microbiome-derived danger signals and metabolites, and thus act as important rheostats for immunoregulation, particularly during early postnatal development. Preventive strategies based on this new knowledge could exploit the diversity of the microbial world and the way humans react to it, and possibly restore old symbiotic relationships that have been lost in recent times, without causing disease or requiring a return to an unhygienic life style.",
"title": ""
}
] |
scidocsrr
|
20ff494622f79b14dd513677718adfe2
|
Semantic Complex Event Processing over End-to-End Data Flows
|
[
{
"docid": "da5e40683054b89d619712c31a3384e5",
"text": "The Los Angeles Smart Grid Project aims to use informatics techniques to bring about a quantum leap in the way demand response load optimization is performed in utilities. Semantic information integration, from sources as diverse as Internet-connected smart meters and social networks, is a linchpin to support the advanced analytics and mining algorithms required for this. In association with it, semantic complex event processing system will allow consumer and utility managers to easily specify and enact energy policies continuously. We present the information systems architecture for the project that is under development, and discuss research issues that emerge from having to design a system that supports 1.4 million customers and a rich ecosystem of Smart Grid applications from users, third party vendors, the utility and regulators.",
"title": ""
},
{
"docid": "44f829c853c1cdd1cf2a0bd2622015bb",
"text": "Alert is an extension architecture designed for transforming a passive SQL DBMS into. an active DBMS. The salient features of the design of Alert are reusing, to the extent possible, the passive DBMS technology, and making minimal changes to the language and implementation of the passive DBMS. Alert provides a layered architecture that allows the semantics of a variety of production rule languages to be supported on top. Rules may be specified on userdefined as well as built-in operations. Both synchronous and asynchronous event, monit,oring are possible. This paper presents the design of Alert and its implementation in the Starburst extensible DBMS.",
"title": ""
}
] |
[
{
"docid": "75ada81b42b5fac12a3aa7344ae7377e",
"text": "Inheritance is well-known and accepted as a mechanism for reuse in object-oriented languages. Unfortunately, due to the coarse granularity of inheritance, it may be difficult to decompose an application into an optimal class hierarchy that maximizes software reuse. Existing schemes based on single inheritance, multiple inheritance, or mixins, all pose numerous problems for reuse. To overcome these problems we propose traits, pure units of reuse consisting only of methods. We develop a formal model of traits that establishes how traits can be composed, either to form other traits, or to form classes. We also outline an experimental validation in which we apply traits to refactor a nontrivial application into composable units.",
"title": ""
},
{
"docid": "a7e5f9cf618d6452945cb6c4db628bbb",
"text": "we present a motion capture device to measure in real-time table tennis strokes. A six degree-of-freedom sensing device, inserted into the racket handle, measures 3D acceleration and 3-axis angular velocity values at a high sampling rate. Data are wirelessly transmitted to a computer in real-time. This flexible system allows for recording and analyzing kinematics information on the motion of the racket, along with synchronized video and sound recordings. Recorded gesture data are analyzed using several algorithms we developed to segment and extract movement features, and to build a reference motion database.",
"title": ""
},
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "02d200da4d0af8ac55852b0b8fe5a8f0",
"text": "Full-duplex relaying is more spectrally efficient than half-duplex relaying as only one channel use is needed per two hops. However, it is crucial to minimize relay self-interference to render full duplex feasible. For this purpose, we analyze a broad range of multiple-input multiple-output (MIMO) mitigation schemes: natural isolation, time-domain cancellation, and spatial suppression. Cancellation subtracts replicated interference signal from the relay input while suppression reserves spatial dimensions for receive and transmit filtering. Spatial suppression can be achieved by antenna subset selection, null-space projection, i.e., receiving and transmitting in orthogonal subspaces, or joint transmit and receive beam selection to support more spatial streams by choosing the minimum eigenmodes for overlapping subspaces. In addition, minimum mean square error (MMSE) filtering can be employed to maintain the desired signal quality, which is inherent for cancellation, and the combination of time- and spatial-domain processing may be better than either alone. Targeting at minimal interference power, we solve optimal filters for each scheme in the cases of joint, separate and independent design. The performance of mitigation schemes is evaluated and compared by simulations. The results confirm that self-interference can be mitigated effectively also in the presence of imperfect side information.",
"title": ""
},
{
"docid": "d0f1064f022f3a3c85a2a76f56f43dbb",
"text": "Increasing amount of online music content has opened new opportunities for implementing new effective information access services – commonly known as music recommender systems – that support music navigation, discovery, sharing, and formation of user communities. In the recent years the new research area of contextual (or situational) music recommendation and retrieval has emerged. The basic idea is to retrieve and suggest music depending on the user’s actual situation, for instance emotional state, or any other contextual conditions that might influence the user’s perception of music. Despite the high potential of such idea, the development of real-world applications that retrieve or recommend music depending on the user’s context is still in its early stages. This survey illustrates various tools and techniques that can be used for addressing the research challenges posed by context-aware music retrieval and recommendation. This survey covers a broad range of topics, starting from classical music information retrieval (MIR) and recommender system (RS) techniques, and then focusing on context-aware music applications as well as the newer trends of affective and social computing applied to the music domain.",
"title": ""
},
{
"docid": "ab05a100cfdb072f65f7dad85b4c5aea",
"text": "Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter.",
"title": ""
},
{
"docid": "471af6726ec78126fcf46f4e42b666aa",
"text": "A new thermal tuning circuit for optical ring modulators enables demonstration of an optical chip-to-chip link for the first time with monolithically integrated photonic devices in a commercial 45nm SOI process, without any process changes. The tuning circuit uses independent 1/0 level-tracking and 1/0 bit counting to remain resilient against laser self-heating transients caused by non-DC-balanced transmit data. A 30fJ/bit transmitter and 374fJ/bit receiver with 6μApk-pk photocurrent sensitivity complete the 5Gb/s link. The thermal tuner consumes 275fJ/bit and achieves a 600 GHz tuning range with a heater tuning efficiency of 3.8μW/GHz.",
"title": ""
},
{
"docid": "57bd8c0c2742027de4b599b129506154",
"text": "Software instrumentation is a powerful and flexible technique for analyzing the dynamic behavior of programs. By inserting extra code in an application, it is possible to study the performance and correctness of programs and systems. Pin is a software system that performs run-time binary instrumentation of unmodified applications. Pin provides an API for writing custom instrumentation, enabling its use in a wide variety of performance analysis tasks such as workload characterization, program tracing, cache modeling, and simulation. Most of the prior work on instrumentation systems has focused on executing Unix applications, despite the ubiquity and importance of Windows applications. This paper identifies the Windows-specific obstacles for implementing a process-level instrumentation system, describes a comprehensive, robust solution, and discusses some of the alternatives. The challenges lie in managing the kernel/application transitions, injecting the runtime agent into the process, and isolating the instrumentation from the application. We examine Pin's overhead on typical Windows applications being instrumented with simple tools up to commercial program analysis products. The biggest factor affecting performance is the type of analysis performed by the tool. While the proprietary nature of Windows makes measurement and analysis difficult, Pin opens the door to understanding program behavior.",
"title": ""
},
{
"docid": "2754f8f6357c15c6bc4e479e3823c288",
"text": "The world wide annual expenditures for cosmetics is estimated at U.S.$18 billion, and many players in the field are competing aggressively to capture more and more market. Hence, companies are interested to know about consumer’s attitude towards cosmetics so as to devise strategies to win over competition. The main purpose of this article is to investigate the influence of attitude on cosmetics buying behaviour. The research question is “what kind of attitudes do the customers have towards buying behaviour of cosmetic products?” A questionnaire was developed and distributed to female consumers in Bangalore city by using convenience sampling method. 118 completed questionnaires were returned and then 100 valid were analyzed by using ANOVA, mean and standard deviation. The result of the study confirms that age, occupation, marital status have positive influence towards cosmetic products. But income does not have any influence on the attitude towards cosmetic products.",
"title": ""
},
{
"docid": "ec2d9c12a906eb999e7a178d0f672073",
"text": "Passive-dynamic walkers are simple mechanical devices, composed of solid parts connected by joints, that walk stably down a slope. They have no motors or controllers, yet can have remarkably humanlike motions. This suggests that these machines are useful models of human locomotion; however, they cannot walk on level ground. Here we present three robots based on passive-dynamics, with small active power sources substituted for gravity, which can walk on level ground. These robots use less control and less energy than other powered robots, yet walk more naturally, further suggesting the importance of passive-dynamics in human locomotion.",
"title": ""
},
{
"docid": "5fc6b0e151762560c8f09d0fe6983ca2",
"text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "fb79df27fa2a5b1af8d292af8d53af6e",
"text": "This paper presents a proportional integral derivative (PID) controller with a derivative filter coefficient to control a twin rotor multiple input multiple output system (TRMS), which is a nonlinear system with two degrees of freedom and cross couplings. The mathematical modeling of TRMS is done using MATLAB/Simulink. The simulation results are compared with the results of conventional PID controller. The results of proposed PID controller with derivative filter shows better transient and steady state response as compared to conventional PID controller.",
"title": ""
},
{
"docid": "cd98932832d8821a98032ae6bbef2576",
"text": "An open-loop stereophonic acoustic echo suppression (SAES) method without preprocessing is presented for teleconferencing systems, where the Wiener filter in the short-time Fourier transform (STFT) domain is employed. Instead of identifying the echo path impulse responses with adaptive filters, the proposed algorithm estimates the echo spectra from the stereo signals using two weighting functions. The spectral modification technique originally proposed for noise reduction is adopted to remove the echo from the microphone signal. Moreover, a priori signal-to-echo ratio (SER) based Wiener filter is used as the gain function to achieve a trade-off between musical noise reduction and computational load for real-time operations. Computer simulation shows the effectiveness and the robustness of the proposed method in several different scenarios.",
"title": ""
},
{
"docid": "924ae8652ad7240eca8a2ca195c01575",
"text": "We propose a novel value-aware quantization which applies aggressively reduced precision to the majority of data while separately handling a small amount of large data in high precision, which reduces total quantization errors under very low precision. We present new techniques to apply the proposed quantization to training and inference. The experiments show that our method with 3-bit activations (with 2% of large ones) can give the same training accuracy as full-precision one while offering significant (41.6% and 53.7%) reductions in the memory cost of activations in ResNet-152 and Inception-v3 compared with the state-of-the-art method. Our experiments also show that deep networks such as Inception-v3, ResNet-101 and DenseNet-121 can be quantized for inference with 4-bit weights and activations (with 1% 16-bit data) within 1% top-1 accuracy drop.",
"title": ""
},
{
"docid": "de1d3377aafd684385a332a03d4b6267",
"text": "It has recently been suggested that brain areas crucial for mentalizing, including the medial prefrontal cortex (mPFC), are not activated exclusively during mentalizing about the intentions, beliefs, morals or traits of the self or others, but also more generally during cognitive reasoning including relational processing about objects. Contrary to this notion, a meta-analysis of cognitive reasoning tasks demonstrates that the core mentalizing areas are not systematically recruited during reasoning, but mostly when these tasks describe some human agency or general evaluative and enduring traits about humans, and much less so when these social evaluations are absent. There is a gradient showing less mPFC activation as less mentalizing content is contained in the stimulus material used in reasoning tasks. Hence, it is more likely that cognitive reasoning activates the mPFC because inferences about social agency and mind are involved.",
"title": ""
},
{
"docid": "e07377cb36e31c8190d5ac96f3891f2a",
"text": "We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system’s scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads. We survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations.",
"title": ""
},
{
"docid": "902bd3d85a67476348f0aee20bc15964",
"text": "Crowdfunding has received a great deal of attention of late, as a promising avenue to fostering entrepreneurship and innovation. A notable aspect of shifting these financial activities to an online setting is that this brings increased visibility and traceability to a potentially sensitive activity. Most crowdfunding platforms maintain a public record of transactions, though a good number also provide transaction-level information controls, enabling users to conceal information as they see fit. We explore the impact of these information control mechanisms on crowdfunder behavior, acknowledging possible positive (e.g., comfort) and negative (e.g., privacy priming) impacts, employing a randomized experiment at a leading crowdfunding platform. Reducing access to information controls increases conversion rates (by ~5%), yet it also decreases average contribution amounts ($5.81 decline). We offer interpretations for these effects along with some empirical support, and we discuss implications for crowdfunding platform design.",
"title": ""
},
{
"docid": "33db6128f85c9300487beb8c00366df0",
"text": "Recurrent neural networks (RNN), convolutional neural networks (CNN) and selfattention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called “bi-directional block self-attention network (Bi-BloSAN)”, for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN.",
"title": ""
},
{
"docid": "de1f5c84419787885e6c9d4c3dbd5f78",
"text": "Signed algorithms and scatter/gather I/O have garnered tremendous interest from both system administrators and biologists in the last several years. After years of confirmed research into the producer-consumer problem, we demonstrate the refinement of IPv4. In this work, we concentrate our efforts on proving that online algorithms and XML can interfere to accomplish this aim.",
"title": ""
}
] |
scidocsrr
|
c62b1f1af2bc05477a8089ff832b7d04
|
802.11 Denial-of-Service Attacks: Real Vulnerabilities and Practical Solutions
|
[
{
"docid": "326cb7464df9c9361be4e27d82f61455",
"text": "We implemented an attack against WEP, the link-layer security protocol for 802.11 networks. The attack was described in a recent paper by Fluhrer, Mantin, and Shamir. With our implementation, and permission of the network administrator, we were able to recover the 128 bit secret key used in a production network, with a passive attack. The WEP standard uses RC4 IVs improperly, and the attack exploits this design failure. This paper describes the attack, how we implemented it, and some optimizations to make the attack more efficient. We conclude that 802.11 WEP is totally insecure, and we provide some recommendations.",
"title": ""
}
] |
[
{
"docid": "248adf4ee726dce737b7d0cbe3334ea3",
"text": "People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology, or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users don’t quite know enough about the domain to know if they are submitting a good query, nor if the results directly resolve their need or can be translated to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work-in-progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given in StackOverflow, and present plans for designing search engine support to help searchers learn as they search.",
"title": ""
},
{
"docid": "44f91387bef2faf4964fa97ba53292db",
"text": "In this work, a nonlinear model predictive controller is developed for a batch polymerization process. The physical model of the process is parameterized along a desired trajectory resulting in a trajectory linearized piecewise model (a multiple linear model bank) and the parameters are identified for an experimental polymerization reactor. Then, a multiple model adaptive predictive controller is designed for thermal trajectory tracking of the MMA polymerization. The input control signal to the process is constrained by the maximum thermal power provided by the heaters. The constrained optimization in the model predictive controller is solved via genetic algorithms to minimize a DMC cost function in each sampling interval.",
"title": ""
},
{
"docid": "6c0f3240b86677a0850600bf68e21740",
"text": "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.",
"title": ""
},
{
"docid": "fae60b86d98a809f876117526106719d",
"text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.",
"title": ""
},
{
"docid": "13cfc33bd8611b3baaa9be37ea9d627e",
"text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.",
"title": ""
},
{
"docid": "b1e431f48c52a267c7674b5526d9ee23",
"text": "Publish/subscribe is a distributed interaction paradigm well adapted to the deployment of scalable and loosely coupled systems.\n Apache Kafka and RabbitMQ are two popular open-source and commercially-supported pub/sub systems that have been around for almost a decade and have seen wide adoption. Given the popularity of these two systems and the fact that both are branded as pub/sub systems, two frequently asked questions in the relevant online forums are: how do they compare against each other and which one to use?\n In this paper, we frame the arguments in a holistic approach by establishing a common comparison framework based on the core functionalities of pub/sub systems. Using this framework, we then venture into a qualitative and quantitative (i.e. empirical) comparison of the common features of the two systems. Additionally, we also highlight the distinct features that each of these systems has. After enumerating a set of use cases that are best suited for RabbitMQ or Kafka, we try to guide the reader through a determination table to choose the best architecture given his/her particular set of requirements.",
"title": ""
},
{
"docid": "2f20e5792104b67143b7dcc43954317e",
"text": "Resource Description Framework (RDF) was designed with the initial goal of developing metadata for the Internet. While the Internet is a conglomeration of many interconnected networks and computers, most of today's best RDF storage solutions are confined to a single node. Working on a single node has significant scalability issues, especially considering the magnitude of modern day data. In this paper we introduce a scalable RDF data management system that uses Accumulo, a Google Bigtable variant. We introduce storage methods, indexing schemes, and query processing techniques that scale to billions of triples across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases, our system outperforms existing distributed RDF solutions, even systems much more complex than ours.",
"title": ""
},
{
"docid": "1c337dd1935eac802be148b7cb9e671f",
"text": "In this paper, we propose generating artificial data that retain statistical properties of real data as the means of providing privacy for the original dataset. We use generative adversarial networks to draw privacy-preserving artificial data samples and derive an empirical method to assess the risk of information disclosure in a differential-privacy-like way. Our experiments show that we are able to generate labelled data of high quality and use it to successfully train and validate supervised models. Finally, we demonstrate that our approach significantly reduces vulnerability of such models to model inversion attacks.",
"title": ""
},
{
"docid": "67808f54305bc2bb2b3dd666f8b4ef42",
"text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.",
"title": ""
},
{
"docid": "47dc7c546c4f0eb2beb1b251ef9e4a81",
"text": "In this paper we describe AMT, a tool for monitoring temporal properties of continuous signals. We first introduce S TL /PSL, a specification formalism based on the industrial standard language P SL and the real-time temporal logic MITL , extended with constructs that allow describing behaviors of real-valued variables. The tool automatically builds property observers from an STL /PSL specification and checks, in an offlineor incrementalfashion, whether simulation traces satisfy the property. The AMT tool is validated through a Fla sh memory case-study.",
"title": ""
},
{
"docid": "3989aa85b78b211e3d6511cf5fb607bd",
"text": "The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing.",
"title": ""
},
{
"docid": "e1429e1dd862d3687d75c4aac63ae907",
"text": "Relational DBMSs remain the main data management technology, despite the big data analytics and no-SQL waves. On the other hand, for data analytics in a broad sense, there are plenty of non-DBMS tools including statistical languages, matrix packages, generic data mining programs and large-scale parallel systems, being the main technology for big data analytics. Such large-scale systems are mostly based on the Hadoop distributed file system and MapReduce. Thus it would seem a DBMS is not a good technology to analyze big data, going beyond SQL queries, acting just as a reliable and fast data repository. In this survey, we argue that is not the case, explaining important research that has enabled analytics on large databases inside a DBMS. However, we also argue DBMSs cannot compete with parallel systems like MapReduce to analyze web-scale text data. Therefore, each technology will keep influencing each other. We conclude with a proposal of long-term research issues, considering the \"big data analytics\" trend.",
"title": ""
},
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
{
"docid": "6f166a5ba1916c5836deb379481889cd",
"text": "Microbial activities drive the global nitrogen cycle, and in the past few years, our understanding of nitrogen cycling processes and the micro-organisms that mediate them has changed dramatically. During this time, the processes of anaerobic ammonium oxidation (anammox), and ammonia oxidation within the domain Archaea, have been recognized as two new links in the global nitrogen cycle. All available evidence indicates that these processes and organisms are critically important in the environment, and particularly in the ocean. Here we review what is currently known about the microbial ecology of anaerobic and archaeal ammonia oxidation, highlight relevant unknowns and discuss the implications of these discoveries for the global nitrogen and carbon cycles.",
"title": ""
},
{
"docid": "deb1c65a6e2dfb9ab42f28c74826309c",
"text": "Large knowledge bases consisting of entities and relationships between them have become vital sources of information for many applications. Most of these knowledge bases adopt the Semantic-Web data model RDF as a representation model. Querying these knowledge bases is typically done using structured queries utilizing graph-pattern languages such as SPARQL. However, such structured queries require some expertise from users which limits the accessibility to such data sources. To overcome this, keyword search must be supported. In this paper, we propose a retrieval model for keyword queries over RDF graphs. Our model retrieves a set of subgraphs that match the query keywords, and ranks them based on statistical language models. We show that our retrieval model outperforms the-state-of-the-art IR and DB models for keyword search over structured data using experiments over two real-world datasets.",
"title": ""
},
{
"docid": "18247ea0349da81fe2cf93b3663b081f",
"text": "Nowadays, more and more companies migrate business from their own servers to the cloud. With the influx of computational requests, datacenters consume tremendous energy every day, attracting great attention in the energy efficiency dilemma. In this paper, we investigate the energy-aware resource management problem in cloud datacenters, where green energy with unpredictable capacity is connected. Via proposing a robust blockchain-based decentralized resource management framework, we save the energy consumed by the request scheduler. Moreover, we propose a reinforcement learning method embedded in a smart contract to further minimize the energy cost. Because the reinforcement learning method is informed from the historical knowledge, it relies on no request arrival and energy supply. Experimental results on Google cluster traces and real-world electricity price show that our approach is able to reduce the datacenters cost significantly compared with other benchmark algorithms.",
"title": ""
},
{
"docid": "8109594325601247cdb253dbb76b9592",
"text": "Disturbance compensation is one of the major problems in control system design. Due to external disturbance or model uncertainty that can be treated as disturbance, all control systems are subject to disturbances. When it comes to networked control systems, not only disturbances but also time delay is inevitable where controllers are remotely connected to plants through communication network. Hence, simultaneous compensation for disturbance and time delay is important. Prior work includes a various combinations of smith predictor, internal model control, and disturbance observer tailored to simultaneous compensation of both time delay and disturbance. In particular, simplified internal model control simultaneously compensates for time delay and disturbances. But simplified internal model control is not applicable to the plants that have two poles at the origin. We propose a modified simplified internal model control augmented with disturbance observer which simultaneously compensates time delay and disturbances for the plants with two poles at the origin. Simulation results are provided.",
"title": ""
},
{
"docid": "098da928abe37223e0eed0c6bf0f5747",
"text": "With the proliferation of social media, fashion inspired from celebrities, reputed designers as well as fashion influencers has shortned the cycle of fashion design and manufacturing. However, with the explosion of fashion related content and large number of user generated fashion photos, it is an arduous task for fashion designers to wade through social media photos and create a digest of trending fashion. Designers do not just wish to have fashion related photos at one place but seek search functionalities that can let them search photos with natural language queries such as ‘red dress’, ’vintage handbags’, etc in order to spot the trends. This necessitates deep parsing of fashion photos on social media to localize and classify multiple fashion items from a given fashion photo. While object detection competitions such as MSCOCO have thousands of samples for each of the object categories, it is quite difficult to get large labeled datasets for fast fashion items. Moreover, state-of-the-art object detectors [2, 7, 9] do not have any functionality to ingest large amount of unlabeled data available on social media in order to fine tune object detectors with labeled datasets. In this work, we show application of a generic object detector [11], that can be pretrained in an unsupervised manner, on 24 categories from recently released Open Images V4 dataset. We first train the base architecture of the object detector using unsupervisd learning on 60K unlabeled photos from 24 categories gathered from social media, and then subsequently fine tune it on 8.2K labeled photos from Open Images V4 dataset. On 300 × 300 image inputs, we achieve 72.7% mAP on a test dataset of 2.4K photos while performing 11% to 17% better as compared to the state-of-the-art object detectors. We show that this improvement is due to our choice of architecture that lets us do unsupervised learning and that performs significantly better in identifying small objects. 1",
"title": ""
},
{
"docid": "dc20d4cac40923be1ba1a706e1fb5abf",
"text": "We have implemented and evaluated a method to populate a company ontology, focusing on hierarchical relations such as acquisitions or subsidiaries. Our method searches for information about user-specified companies on the Internet using a search engine API (Google Custom Search API). From the resulted snippets we identify companies using machine learning and extract relations between them using a set of manually defined semantic patterns. We developed filtering methods both for companies and unlikely relations and from the set of company and relation instances we build this way, we construct an ontology addressing identity matching and consistency problems in a company-specific manner. We achieved a precision of 77 to 93 percent, depending on the evaluated relations.",
"title": ""
},
{
"docid": "c22d7b209a107c501aa09e7d16a93008",
"text": "With a growing number of courses offered online and degrees offered through the Internet, there is a considerable interest in online education, particularly as it relates to the quality of online instruction. The major concerns are centering on the following questions: What will be the new role for instructors in online education? How will students' learning outcomes be assured and improved in online learning environment? How will effective communication and interaction be established with students in the absence of face-to-face instruction? How will instructors motivate students to learn in the online learning environment? This paper will examine new challenges and barriers for online instructors, highlight major themes prevalent in the literature related to “quality control or assurance” in online education, and provide practical strategies for instructors to design and deliver effective online instruction. Recommendations will be made on how to prepare instructors for quality online instruction.",
"title": ""
}
] |
scidocsrr
|
fc89fd5e68d65a511a3253cddb093338
|
Linking Virtual Machine Mobility to User Mobility
|
[
{
"docid": "15fa73633d6ec7539afc91bb1f45098f",
"text": "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.",
"title": ""
}
] |
[
{
"docid": "feb649029daef80f2ecf33221571a0b1",
"text": "The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O’Hare international airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport.",
"title": ""
},
{
"docid": "4aa17982590e86fea90267e4386e2ef1",
"text": "There are many promising psychological interventions on the horizon, but there is no clear methodology for preparing them to be scaled up. Drawing on design thinking, the present research formalizes a methodology for redesigning and tailoring initial interventions. We test the methodology using the case of fixed versus growth mindsets during the transition to high school. Qualitative inquiry and rapid, iterative, randomized \"A/B\" experiments were conducted with ~3,000 participants to inform intervention revisions for this population. Next, two experimental evaluations showed that the revised growth mindset intervention was an improvement over previous versions in terms of short-term proxy outcomes (Study 1, N=7,501), and it improved 9th grade core-course GPA and reduced D/F GPAs for lower achieving students when delivered via the Internet under routine conditions with ~95% of students at 10 schools (Study 2, N=3,676). Although the intervention could still be improved even further, the current research provides a model for how to improve and scale interventions that begin to address pressing educational problems. It also provides insight into how to teach a growth mindset more effectively.",
"title": ""
},
{
"docid": "02a130ee46349366f2df347119831e5c",
"text": "Low power ad hoc wireless networks operate in conditions where channels are subject to fading. Cooperative diversity mitigates fading in these networks by establishing virtual antenna arrays through clustering the nodes. A cluster in a cooperative diversity network is a collection of nodes that cooperatively transmits a single packet. There are two types of clustering schemes: static and dynamic. In static clustering all nodes start and stop transmission simultaneously, and nodes do not join or leave the cluster while the packet is being transmitted. Dynamic clustering allows a node to join an ongoing cooperative transmission of a packet as soon as the packet is received. In this paper we take a broad view of the cooperative network by examining packet flows, while still faithfully implementing the physical layer at the bit level. We evaluate both clustering schemes using simulations on large multi-flow networks. We demonstrate that dynamically-clustered cooperative networks substantially outperform both statically-clustered cooperative networks and classical point-to-point networks.",
"title": ""
},
{
"docid": "c4816cafb042e6d96caee6af90583422",
"text": "Software Defined Networking (SDN) is an emerging network control paradigm focused on logical centralization and programmability. At the same time, distributed routing protocols, most notably OSPF and IS-IS, are still prevalent in IP networks, as they provide shortest path routing, fast topological convergence after network failures, and, perhaps most importantly, the confidence based on decades of reliable operation. Therefore, a hybrid SDN/OSPF operation remains a desirable proposition. In this paper, we propose a new method of hybrid SDN/OSPF operation. Our method is different from other hybrid approaches, as it uses SDN nodes to partition an OSPF domain into sub-domains thereby achieving the traffic engineering capabilities comparable to full SDN operation. We place SDN-enabled routers as subdomain border nodes, while the operation of the OSPF protocol continues unaffected. In this way, the SDN controller can tune routing protocol updates for traffic engineering purposes before they are flooded into sub-domains. While local routing inside sub-domains remains stable at all times, inter-sub-domain routes can be optimized by determining the routes in each traversed sub-domain. As the majority of traffic in non-trivial topologies has to traverse multiple subdomains, our simulation results confirm that a few SDN nodes allow traffic engineering up to a degree that renders full SDN deployment unnecessary.",
"title": ""
},
{
"docid": "1febb341f4fa0227683f3edbe8b95ff3",
"text": "Distributed representation learned with neural networks has recently shown to be effective in modeling natural languages at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to help model larger spans of text, e.g., documents, is intriguing, and further investigation would still be desirable. This paper aims to enhance neural network models for such a purpose. A typical problem of document-level modeling is automatic summarization, which aims to model documents in order to generate summaries. In this paper, we propose neural models to train computers not just to pay attention to specific regions and content of input documents with attention models, but also distract them to traverse between different content of a document so as to better grasp the overall meaning for summarization. Without engineering any features, we train the models on two large datasets. The models achieve the state-of-the-art performance, and they significantly benefit from the distraction modeling, particularly when input documents are long.",
"title": ""
},
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
},
{
"docid": "d7ee317d2dad7fbf6b12fa2842fc2662",
"text": "The ability of a robot vision system to capture informative images is greatly affected by the condition of lighting in the scene. This paper reveals the importance of active lighting control for robotic manipulation and proposes novel strategies for good visual interpretation of objects in the workspace. Good illumination means that it helps to get images with large signal-to-noise ratio, wide range of linearity, high image contrast, and true color rendering of the object's natural properties. It should also avoid occurrences of highlight and extreme intensity unbalance. If only passive illumination is used, the robot often gets poor images where no appropriate algorithms can be used to extract useful information. A fuzzy controller is further developed to maintain the lighting level suitable for robotic manipulation and guidance in dynamic environments. As carried out in this paper, with both examples of numerical simulations and practical experiments, it promises satisfactory results with the proposed idea of active lighting control.",
"title": ""
},
{
"docid": "97ef62d13180ee6bb44ec28ff3b3d53e",
"text": "Glioblastoma tumour cells release microvesicles (exosomes) containing mRNA, miRNA and angiogenic proteins. These microvesicles are taken up by normal host cells, such as brain microvascular endothelial cells. By incorporating an mRNA for a reporter protein into these microvesicles, we demonstrate that messages delivered by microvesicles are translated by recipient cells. These microvesicles are also enriched in angiogenic proteins and stimulate tubule formation by endothelial cells. Tumour-derived microvesicles therefore serve as a means of delivering genetic information and proteins to recipient cells in the tumour environment. Glioblastoma microvesicles also stimulated proliferation of a human glioma cell line, indicating a self-promoting aspect. Messenger RNA mutant/variants and miRNAs characteristic of gliomas could be detected in serum microvesicles of glioblastoma patients. The tumour-specific EGFRvIII was detected in serum microvesicles from 7 out of 25 glioblastoma patients. Thus, tumour-derived microvesicles may provide diagnostic information and aid in therapeutic decisions for cancer patients through a blood test.",
"title": ""
},
{
"docid": "b14ab40b4267ba8c69e755614e798f0b",
"text": "To enhance the treatment of relations in biomedical ontologies we advance a methodology for providing consistent and unambiguous formal definitions of the relational expressions used in such ontologies in a way designed to assist developers and users in avoiding errors in coding and annotation. The resulting Relation Ontology can promote interoperability of ontologies and support new types of automated reasoning about the spatial and temporal dimensions of biological and medical phenomena.",
"title": ""
},
{
"docid": "1149a32a286e053780fb0bd19e9d6446",
"text": "Website aesthetics have been recognized as an influential moderator of people's behavior and perception. However, what users perceive as \"good design\" is subject to individual preferences, questioning the feasibility of universal design guidelines. To better understand how people's visual preferences differ, we collected 2.4 million ratings of the visual appeal of websites from nearly 40 thousand participants of diverse backgrounds. We address several gaps in the knowledge about design preferences of previously understudied groups. Among other findings, our results show that the level of colorfulness and visual complexity at which visual appeal is highest strongly varies: Females, for example, liked colorful websites more than males. A high education level generally lowers this preference for colorfulness. Russians preferred a lower visual complexity, and Macedonians liked highly colorful designs more than any other country in our dataset. We contribute a computational model and estimates of peak appeal that can be used to support rapid evaluations of website design prototypes for specific target groups.",
"title": ""
},
{
"docid": "cf3a350e303c7f50aca7b9e4b14292c6",
"text": "In this study, a full-wave-based circuit model of an interdigital capacitor in the form of a coplanar waveguide (CPW) configuration is studied by using numerical thru-reflect-line calibration techniques, which can easily be combined with commercial electromagnetic simulation software. Subsequently, a slow-wave line coupler is proposed and realized for size miniaturization by loading different interdigital capacitors onto a coupled CPW line, which has equal phase velocities for both odd and even modes. An experimental prototype is fabricated, and measured results show a good agreement with simulated ones, thereby demonstrating an excellent performance.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "660465cbd4bd95108a2381ee5a97cede",
"text": "In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method.",
"title": ""
},
{
"docid": "8a45e83904913f8e4fbb7c59ff5d056c",
"text": "The present article examines the nature and function of human agency within the conceptual model of triadic reciprocal causation. In analyzing the operation of human agency in this interactional causal structure, social cognitive theory accords a central role to cognitive, vicarious, self-reflective, and self-regulatory processes. The issues addressed concern the psychological mechanisms through which personal agency is exercised, the hierarchical structure of self-regulatory systems, eschewal of the dichotomous construal of self as agent and self as object, and the properties of a nondualistic but nonreductional conception of human agency. The relation of agent causality to the fundamental issues of freedom and determinism is also analyzed.",
"title": ""
},
{
"docid": "99bd8339f260784fff3d0a94eb04f6f4",
"text": "Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.",
"title": ""
},
{
"docid": "0d97f83850eeca0ea631476273cb1628",
"text": "We study how external versus internal innovations promote economic growth through a tractable endogenous growth framework with multiple innovation sizes, multi-product firms, and entry/exit. Firms invest in external R&D to acquire new product lines and in internal R&D to improve their existing product lines. A baseline model derives the theoretical implications of weaker scaling for external R&D versus internal R&D, and the resulting predictions align with observed empirical regularities for innovative firms. Quantifying a generalized model for the recent U.S. economy using matched Census Bureau and patent data, we observe a modest departure for external R&D from perfect scaling frameworks. JEL Classification: O31, O33, O41, L16.",
"title": ""
},
{
"docid": "9e9ca921df1a2a8b8ddb37d1ca7be41d",
"text": "The quantity of data transmitted in the network intensified rapidly with the increased dependency on social media applications, sensors for data acquisitions and smartphones utilizations. Typically, such data is unstructured and originates from multiple sources in different format. Consequently, the abstraction of data for rendering is difficult, that lead to the development of a computing system that is able to store data in unstructured format and support distributed parallel computing. To data, there exist approaches to handle big data using NoSQL. This paper provides a review and the comparison between NoSQL and Relational Database Management System (RDBMS). By reviewing each approach, the mechanics of NoSQL systems can be clearly distinguished from the RDBMS. Basically, such systems rely on multiple factors, that include the query language, architecture, data model and consumer API. This paper also defines the application that matches the system and subsequently able to accurately correlates to a specific NoSQL system.",
"title": ""
},
{
"docid": "07fcbe8f7d4fe201e7c9f8ccd091ba64",
"text": "Program visualizations help students understand the runtime behavior of other programs. They are educational tools to complement lectures or replace inefficient static drawings. A recent survey found 46 program visualizations developed from 1979 to 2012 reported that their effectiveness is unclear. They also evaluated learner engagement strategies implemented by visualization systems, but other learning principles were not considered. Learning principles are potential key factors in the success of program visualization as learning tools. In this paper, we identified 16 principles that may contribute to the effectiveness of a learning tool based on Vygotsky's learning theory. We hypothesize that some of these principles could be supported by incorporating visual concrete allegories and gamification. We conducted a literature review to know if these principles are supported by existing solutions. We found six new systems between 2012 and 2015. Very few systems consider a learning theory as theoretical framework. Only two out the 16 learning principles are supported by existing visualizations. All systems use unconnected visual metaphors, two use concrete visual metaphors, and one implemented a gamification principle. We expect that using concrete visual allegories and gamification in future program visualizations will significantly improve their effectiveness.",
"title": ""
},
{
"docid": "27ee9fff25914a4b63979f2a5cc8255e",
"text": "Personalized tutoring feedback is a powerful method that expert human tutors apply when helping students to optimize their learning. Thus, research on tutoring feedback strategies tailoring feedback according to important factors of the learning process has been recognized as a promising issue in the field of computer-based adaptive educational technologies. Our paper seeks to contribute to this area of research by addressing the following aspects: First, to investigate how students’ gender, prior knowledge, and motivational characteristics relate to learning outcomes (knowledge gain and changes in motivation). Second, to investigate the impact of these student characteristics on how tutoring feedback strategies varying in content (procedural vs. conceptual) and specificity (concise hints vs. elaborated explanations) of tutoring feedback messages affect students’ learning and motivation. Third, to explore the influence of the feedback parameters and student characteristics on students’ immediate postfeedback behaviour (skipping vs. trying to accomplish a task, and failing vs. succeeding in providing a correct answer). To address these issues, detailed log-file analyses of an experimental study have been conducted. In this study, 124 sixth and seventh graders have been exposed to various tutoring feedback strategies while working on multi-trial error correction tasks in the domain of fraction arithmetic. The web-based intelligent learning environment ActiveMath was used to present the fraction tasks and trace students’ progress and activities. The results reveal that gender is an important factor for feedback efficiency: Male students achieve significantly lower knowledge gains than female students under all tutoring feedback conditions (particularly, under feedback strategies starting with a conceptual hint). Moreover, perceived competence declines from preto post-test significantly more for boys than for girls. Yet, the decline in perceived competence is not accompanied by a decline in intrinsic motivation, which, instead, increases significantly from preto post-test. With regard to the post-feedback behaviour, the results indicate that students skip further attempts more frequently after conceptual than after procedural feedback messages. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
0761cd1d93dcb2426e09d2165fec1158
|
Generate Believable Causal Plots with User Preferences Using Constrained Monte Carlo Tree Search
|
[
{
"docid": "90ca336fa0d6aae07914f03df9bbc2ad",
"text": "Planning-based techniques are a very powerful tool for automated story generation. However, as the number of possible actions increases, traditional planning techniques suffer from a combinatorial explosion due to large branching factors. In this work, we apply Monte Carlo Tree Search (MCTS) techniques to generate stories in domains with large numbers of possible actions (100+). Our approach employs a Bayesian story evaluation method to guide the planning towards believable stories that reach a user defined goal. We generate stories in a novel domain with different type of story goals. Our approach shows an order of magnitude improvement in performance over traditional search techniques.",
"title": ""
},
{
"docid": "56b706edc6d1b6a2ff64770cb3f79c2e",
"text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.",
"title": ""
},
{
"docid": "b3d49bd191e0432e4306ee08b49e4c7c",
"text": "ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. This paper presents the latest iteration, ConceptNet 5, including its fundamental design decisions, ways to use it, and evaluations of its coverage and accuracy.",
"title": ""
},
{
"docid": "6605397ad283fd4d353150d9066f8e6e",
"text": "In this paper we present our continuing efforts to generate narrative using a character-centric approach. In particular we discuss the advantages of explicitly representing the emergent event sequence in order to be able to exert influence on it and generate stories that ‘retell’ the emergent narrative. Based on a narrative distinction between fabula, plot and presentation, we make a first step by presenting a model based on story comprehension that can capture the fabula, and show how it can be used for the automatic creation of stories.",
"title": ""
}
] |
[
{
"docid": "9754e309c6fb4805618d6ba4c18b5615",
"text": "Deep neural networks (NN) are extensively used for machine learning tasks such as image classification, perception and control of autonomous systems. Increasingly, these deep NNs are also been deployed in high-assurance applications. Thus, there is a pressing need for developing techniques to verify neural networks to check whether certain user-expected properties are satisfied. In this paper, we study a specific verification problem of computing a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron. Range estimation is a key primitive for verifying deep NNs. We present an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set. In contrast to recently proposed “monolithic” optimization approaches, we use local gradient descent to repeatedly find and eliminate local minima of the function. The final global optimum is certified using a mixed integer programming instance. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.",
"title": ""
},
{
"docid": "759b85bd270afb908ce2b4f23e0f5269",
"text": "In this paper we discuss λ-policy iteration, a method for exact and approximate dynamic programming. It is intermediate between the classical value iteration (VI) and policy iteration (PI) methods, and it is closely related to optimistic (also known as modified) PI, whereby each policy evaluation is done approximately, using a finite number of VI. We review the theory of the method and associated questions of bias and exploration arising in simulation-based cost function approximation. We then discuss various implementations, which offer advantages over well-established PI methods that use LSPE(λ), LSTD(λ), or TD(λ) for policy evaluation with cost function approximation. One of these implementations is based on a new simulation scheme, called geometric sampling, which uses multiple short trajectories rather than a single infinitely long trajectory.",
"title": ""
},
{
"docid": "db3c163d64f258d1bca43650d05b9672",
"text": "Little is known about what causes anti-social behavior online. The paper at hand analyzes vandalism and damage in Wikipedia with regard to the time it is conducted and the country it originates from. First, we identify vandalism and damaging edits via ex post facto evidence by mining Wikipedia’s revert graph. Second, we geolocate the cohort of edits from anonymous Wikipedia editors using their associated IP addresses and edit times, showing the feasibility of reliable historic geolocation with respect to country and time zone, even under limited geolocation data. Third, we conduct the first spatiotemporal analysis of vandalism on Wikipedia. Our analysis reveals significant differences for vandalism activities during the day, and for different days of the week, seasons, countries of origin, as well as Wikipedia’s languages. For the analyzed countries, the ratio is typically highest at nonsummer workday mornings, with additional peaks after break times. We hence assume that Wikipedia vandalism is linked to labor, perhaps serving as relief from stress or boredom, whereas cultural differences have a large effect. Our results open up avenues for new research on collaborative writing at scale, and advanced technologies to identify and handle antisocial behavior in online communities.",
"title": ""
},
{
"docid": "210ec3c86105f496087c7b012619e1d3",
"text": "An ultra compact projection system based on a high brightness OLEd micro display is developed. System design and realization of a prototype are presented. This OLEd pico projector with a volume of about 10 cm3 can be integrated into portable systems like mobile phones or PdAs. The Fraunhofer IPMS developed the high brightness monochrome OLEd micro display. The Fraunhofer IOF desig ned the specific projection lens [1] and in tegrated the OLEd and the projection optic to a full functional pico projection system. This article provides a closer look on the technology and its possibilities.",
"title": ""
},
{
"docid": "23c8dd52480d1193b2728b05c9458080",
"text": "This article presents an overview of highway cooperative collision avoidance (CCA), which is an emerging vehicular safety application using the IEEE- and ASTM-adopted Dedicated Short Range Communication (DSRC) standard. Along with a description of the DSRC architecture, we introduce the concept of CCA and its implementation requirements in the context of a vehicle-to-vehicle wireless network, primarily at the Medium Access Control (MAC) and the routing layer. An overview is then provided to establish that the MAC and routing protocols from traditional Mobile Ad Hoc networks arc not directly applicable for CCA and similar safety-critical applications. Specific constraints and future research directions are then identified for packet routing protocols used to support such applications in the DSRC environment. In order to further explain the interactions between CCA and its underlying networking protocols, we present an example of the safety performance of CCA using simulated vehicle crash experiments. The results from these experiments arc also used to demonstrate the need for network data prioritization for safety-critical applications such as CCA. Finally, the performance sensitivity of CCA to unreliable wireless channels is discussed based on the experimental results.",
"title": ""
},
{
"docid": "efd566ac16ce096fe44fb89147d6976c",
"text": "Advances of sensor and RFID technology provide significant new power for humans to sense, understand and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs without line of sight, thus it can be used for identifying, locating, tracking and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management: i) RFID observations contain duplicates, which have to be filtered; ii) RFID observations have implicit meanings, which have to be transformed and aggregated into semantic data represented in their data models; and iii) RFID data are temporal, streaming, and in high volume, and have to be processed on the fly. Thus, a general RFID data processing framework is needed to automate the transformation of physical RFID observations into the virtual counterparts in the virtual world linked to business applications. In this paper, we take an event-oriented approach to process RFID data, by devising RFID application logic into complex events. We then formalize the specification and semantics of RFID events and rules. We demonstrate that traditional ECA event engine cannot be used to support highly temporally constrained RFID events, and develop an RFID event detection engine that can effectively process complex RFID events. The declarative event-based approach greatly simplifies the work of RFID data processing, and significantly reduces the cost of RFID data integration.",
"title": ""
},
{
"docid": "f4955f2102675b67ffbe5c220e859c3b",
"text": "Identification of named entities such as person, organization and product names from text is an important task in information extraction. In many domains, the same entity could be referred to in multiple ways due to variations introduced by different user groups, variations of spellings across regions or cultures, usage of abbreviations, typographical errors and other reasons associated with conventional usage. Identifying a piece of text as a mention of an entity in such noisy data is difficult, even if we have a dictionary of possible entities. Previous approaches treat the synonym problem as part entity disambiguation and use learning-based methods that use the context of the words to identify synonyms. In this paper, we show that existing domain knowledge, encoded as rules, can be used effectively to address the synonym problem to a considerable extent. This makes the disambiguation task simpler, without the need for much training data. We look at a subset of application scenarios in named entity extraction, categorize the possible variations in entity names, and define rules for each category. Using these rules, we generate synonyms for the canonical list and match these synonyms to the actual occurrence in the data sets. In particular, we describe the rule categories that we developed for several named entities and report the results of applying our technique of extracting named entities by generating synonyms for two different domains.",
"title": ""
},
{
"docid": "9fd0049d079919282082a119763f2740",
"text": "The rapid development of Internet has given birth to a new business model: Cloud Computing. This new paradigm has experienced a fantastic rise in recent years. Because of its infancy, it remains a model to be developed. In particular, it must offer the same features of services than traditional systems. The cloud computing is large distributed systems that employ distributed resources to deliver a service to end users by implementing several technologies. Hence providing acceptable response time for end users, presents a major challenge for cloud computing. All components must cooperate to meet this challenge, in particular through load balancing algorithms. This will enhance the availability and will gain the end user confidence. In this paper we try to give an overview of load balancing in the cloud computing by exposing the most important research challenges.",
"title": ""
},
{
"docid": "55ada092fd628aead0fd64d20eff7b69",
"text": "BER estimation from measured EVM values is shown experimentally for QPSK and 16QAM optical signals with 28 GBd. Various impairments, such as gain imbalance, quadrature error and timing skew, are introduced into the transmitted signal in order to evaluate the robustness of the method. The EVM was measured using two different real-time sampling systems and the EVM measurement accuracy is discussed.",
"title": ""
},
{
"docid": "4a1a9504603177613cbc51c427de39d0",
"text": "A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed layout design synthesizes refocusing slices directly from micro images by omitting the process for the commonly used sub-aperture extraction. Therefore, intellectual property cores, containing switch controlled Finite Impulse Response (FIR) filters, are developed and applied to the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. Enabling the hardware design to work economically, the FIR filters are composed of stored product as well as upsampling and interpolation techniques in order to achieve an ideal relation between image resolution, delay time, power consumption and the demand of logic gates. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.",
"title": ""
},
{
"docid": "2ba69997f51aa61ffeccce33b2e69054",
"text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at https: //sites.google.com/view/simopt.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "26bc2aa9b371e183500e9c979c1fff65",
"text": "Complex regional pain syndrome (CRPS) is clinically characterized by pain, abnormal regulation of blood flow and sweating, edema of skin and subcutaneous tissues, trophic changes of skin, appendages of skin and subcutaneous tissues, and active and passive movement disorders. It is classified into type I (previously reflex sympathetic dystrophy) and type II (previously causalgia). Based on multiple evidence from clinical observations, experimentation on humans, and experimentation on animals, the hypothesis has been put forward that CRPS is primarily a disease of the central nervous system. CRPS patients exhibit changes which occur in somatosensory systems processing noxious, tactile and thermal information, in sympathetic systems innervating skin (blood vessels, sweat glands), and in the somatomotor system. This indicates that the central representations of these systems are changed and data show that CRPS, in particular type I, is a systemic disease involving these neuronal systems. This way of looking at CRPS shifts the attention away from interpreting the syndrome conceptually in a narrow manner and to reduce it to one system or to one mechanism only, e. g., to sympathetic-afferent coupling. It will further our understanding why CRPS type I may develop after a trivial trauma, after a trauma being remote from the affected extremity exhibiting CRPS, and possibly after immobilization of an extremity. It will explain why, in CRPS patients with sympathetically maintained pain, a few temporary blocks of the sympathetic innervation of the affected extremity sometimes lead to long-lasting (even permanent) pain relief and to resolution of the other changes observed in CRPS. This changed view will bring about a diagnostic reclassification and redefinition of CRPS and will have bearings on the therapeutic approaches. Finally it will shift the focus of research efforts.",
"title": ""
},
{
"docid": "777f87414c0185739a92bbdb0f6aa994",
"text": "Limb apraxia (LA), is a neuropsychological syndrome characterized by difficulty in performing gestures and may therefore be an ideal model for investigating whether action execution deficits are causatively linked to deficits in action understanding. We tested 33 left brain-damaged patients and 8 right brain-damaged patients for the presence of the LA. Importantly, we also tested all the patients in an ad hoc developed gesture recognition task wherein an actor performs, either correctly or incorrectly, transitive (using objects) or intransitive (without objects) meaningful conventional limb gestures. Patients were instructed to judge whether the observed gesture was correct or incorrect. Lesion analysis enabled us to evaluate the relationship between specific brain regions and behavioral performance in gesture execution and gesture comprehension. We found that LA was present in 21 left brain-damaged patients and it was linked to frontal and parietal lesions. Moreover, we found that recognition of correct execution of familiar gestures performed by others was more impaired in patients with LA than in nonapraxic patients. Crucially, the gesture comprehension deficit correlated with damage to the opercular and triangularis portions of the inferior frontal gyrus, two regions that are involved in complex aspects of action-related processing. In contrast, no such relationship was observed with lesions centered on the inferior parietal cortex. The present findings suggest that lesions to left frontal regions that are involved in planning and performing actions are causatively associated with deficits in the recognition of the correct execution of meaningful gestures.",
"title": ""
},
{
"docid": "6203338ac688f2db4fad13a85e48c0ca",
"text": "Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samples from a multinomial distribution over senses which are in turn characterized as distributions over words. The Bayesian framework provides a principled way to incorporate a wide range of features beyond lexical cooccurrences and to systematically assess their utility on the sense induction task. The proposed approach yields improvements over state-of-the-art systems on a benchmark dataset.",
"title": ""
},
{
"docid": "72b15b373785198624438cdd7e187a79",
"text": "The technical debt metaphor is widely used to encapsulate numerous software quality problems. The metaphor is attractive to practitioners as it communicates to both technical and nontechnical audiences that if quality problems are not addressed, things may get worse. However, it is unclear whether there are practices that move this metaphor beyond a mere communication mechanism. Existing studies of technical debt have largely focused on code metrics and small surveys of developers. In this paper, we report on our survey of 1,831 participants, primarily software engineers and architects working in long-lived, software-intensive projects from three large organizations, and follow-up interviews of seven software engineers. We analyzed our data using both nonparametric statistics and qualitative text analysis. We found that architectural decisions are the most important source of technical debt. Furthermore, while respondents believe the metaphor is itself important for communication, existing tools are not currently helpful in managing the details. We use our results to motivate a technical debt timeline to focus management and tooling approaches.",
"title": ""
},
{
"docid": "9547b04b76e653c8b4854ae193b4319f",
"text": "© 2017 Western Digital Corporation or its affiliates. All rights reserved Emerging fast byte-addressable non-volatile memory (eNVM) technologies such as ReRAM and 3D Xpoint are projected to offer two orders of magnitude higher performance than flash. However, the existing solid-state drive (SSD) architecture optimizes for flash characteristics and is not adequate to exploit the full potential of eNVMs due to architectural and I/O interface (e.g., PCIe, SATA) limitations. To improve the storage performance and reduce the host main memory requirement for KVS, we propose a novel SSD architecture that extends the semantic of SSD with the KVS features and implements indexing capability inside SSD. It has in-storage processing engine that implements key-value operations such as get, put and delete to efficiently operate on KV datasets. The proposed system introduces a compute channel interface to offload key-value operations down to the SSD that significantly reduces the operating system, file system and other software overhead. This SSD achieves 4.96 Mops/sec get and 3.44 Mops/sec put operations and shows better scalability with increasing number of keyvalue pairs as compared to flash-based NVMe (flash-NVMe) and DRAMbased NVMe (DRAM-NVMe) devices. With decreasing DRAM size by 75%, its performance decreases gradually, achieving speedup of 3.23x as compared to DRAM-NVMe. This SSD significantly improves performance and reduces memory by exploiting the fine grain parallelism within a controller and keeping data movement local to effectively utilize eNVM bandwidth and eliminating the superfluous data movement between the host and the SSD. Abstract",
"title": ""
},
{
"docid": "c01072bc843aafc88b157b6de1878829",
"text": "A new MEMS capacitive accelerometer has been developed to meet the requirements for oil and gas exploration, specifically for imaging deep and complex subterranean features. The sensor has been optimized to have a very low noise floor in a frequency range of 1–200 Hz. Several design and process parameters were modified from our previous sensors to reduce noise. Testing of the sensor has demonstrated a noise floor of 10ng/√Hz, in agreement with our predictive noise models. The sensor has a dynamic range of 120db with a maximum acceleration of +/− 80mg. In addition to the performance specifications, automated calibration routines have been implemented, allowing bias and sensitivity calibrations to be done in the field to ensure valid and accurate data. The sensor frequency and quality factor can also be measured in the field for an automated sensor health check.",
"title": ""
},
{
"docid": "4c2ab8f148d2e3136d4976b1b88184d5",
"text": "In ten years, more than half the world’s population will be living in cities. The United Ž . Nations UN has stated that this will threaten cities with social conflict, environmental degradation and the collapse of basic services. The economic, social, and environmental planning practices of societies embodying ‘urban sustainability’ have been proposed as antidotes to these negative urban trends. ‘Urban sustainability’ is a doctrine with diverse origins. The author believes that the alternative models of cultural development in Curitiba, Brazil, Kerala, India, and Nayarit, Mexico embody the integration and interlinkage of economic, social, and environmental sustainability. Curitiba has become a more livable city by building an efficient intra-urban bus system, expanding urban green space, and meeting the basic needs of the urban poor. Kerala has attained social harmony by emphasizing equitable resource distribution rather than consumption, by restraining reproduction, and by attacking divisions of race, caste, religion, and gender. Nayarit has sought to balance development with the environment by framing a nature-friendly development plan that protects natural systems from urban development and that involves the public in the development process. A detailed examination of these alternative cultural development models reveals a myriad of possible means by which economic, social, and environmental sustainability might be advanced in practice. The author concludes that while these examples from the developing world cannot be directly translated to cities in the developed world, they do indicate in a general sense the imaginative policies that any society must foster if it is to achieve ‘urban sustainability’.",
"title": ""
}
] |
scidocsrr
|
6e8ed213eca2ebc43cbaf5e062237b37
|
Volume Parameterization for Design Automation of Customized Free-Form Products
|
[
{
"docid": "30604dca66bbf3f0abe63c101f02e434",
"text": "This paper presents a novel feature based parameterization approach of human bodies from the unorganized cloud points and the parametric design method for generating new models based on the parameterization. The parameterization consists of two phases. Firstly, the semantic feature extraction technique is applied to construct the feature wireframe of a human body from laser scanned 3D unorganized points. Secondly, the symmetric detail mesh surface of the human body is modeled. Gregory patches are utilized to generate G 1 continuous mesh surface interpolating the curves on feature wireframe. After that, a voxel-based algorithm adds details on the smooth G 1 continuous surface by the cloud points. Finally, the mesh surface is adjusted to become symmetric. Compared to other template fitting based approaches, the parameterization approach introduced in this paper is more efficient. The parametric design approach synthesizes parameterized sample models to a new human body according to user input sizing dimensions. It is based on a numerical optimization process. The strategy of choosing samples for synthesis is also introduced. Human bodies according to a wide range of dimensions can be generated by our approach. Different from the mathematical interpolation function based human body synthesis methods, the models generated in our method have the approximation errors minimized. All mannequins constructed by our approach have consistent feature patches, which benefits the design automation of customized clothes around human bodies a lot.",
"title": ""
}
] |
[
{
"docid": "18f739a605222415afdea4f725201fba",
"text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.",
"title": ""
},
{
"docid": "c47f8e32713fe0bb1c8fbf3761216eff",
"text": "Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from $1\\times 1$ to $7\\times 7$ . NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and presented the results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Postsynthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply–accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm2. As further proof of NullHop’s usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.",
"title": ""
},
{
"docid": "348115a5dddbc2bcdcf5552b711e82c0",
"text": "Enterococci are Gram-positive, catalase-negative, non-spore-forming, facultative anaerobic bacteria, which usually inhabit the alimentary tract of humans in addition to being isolated from environmental and animal sources. They are able to survive a range of stresses and hostile environments, including those of extreme temperature (5-65 degrees C), pH (4.5-10.0) and high NaCl concentration, enabling them to colonize a wide range of niches. Virulence factors of enterococci include the extracellular protein Esp and aggregation substances (Agg), both of which aid in colonization of the host. The nosocomial pathogenicity of enterococci has emerged in recent years, as well as increasing resistance to glycopeptide antibiotics. Understanding the ecology, epidemiology and virulence of Enterococcus species is important for limiting urinary tract infections, hepatobiliary sepsis, endocarditis, surgical wound infection, bacteraemia and neonatal sepsis, and also stemming the further development of antibiotic resistance.",
"title": ""
},
{
"docid": "74bc04c376afa3a05f66620ec0873ed4",
"text": "One of the biggest challenges for forensic pathologists is to diagnose the postmortem interval (PMI) delimitation; therefore, the aim of this study was to use a routine histopathologic examination and quantitative analysis to obtain an accurate diagnosis of PMI. The current study was done by using 24 adult male albino rats divided into 8 groups based on the scarification schedule (0, 8, 16, 24, 32, 40, 48, and 72 hours PMI). Skin specimens were collected and subjected to a routine histopathologic processing. Examination of hematoxylin-eosin-stained sections from the skin, its appendages and underlying muscles were carried out. Morphometric analysis of epidermal nuclear chromatin intensities and area percentages, reticular dermis integrated density, and sebaceous gland nuclei areas and chromatin condensation was done. Progressive histopathologic changes could be detected in epidermis, dermis, hypodermis, underlying muscles including nerve endings, and red blood cells in relation to hours PMI. Significant difference was found in epidermal nuclear chromatin intensities at different-hours PMI (at P < 0.001). The highest intensity was detected 40 hours PMI. Quantitative analysis of measurements of dermal collagen area percentages revealed a high significant difference between 0 hours PMI and 24 to 72 hours PMI (P < 0.001). As the PMI increases, sebaceous gland nuclei and nuclear chromatin condensation showed a dramatic decrease. Significant differences of sebaceous gland nuclei areas between 0 hours and different-hours PMI (P < 0.001) were obtained. A combination between routine histopathologic examination and quantitative and morphometric analysis of the skin could be used to evaluate the time of death in different-hours PMI.",
"title": ""
},
{
"docid": "2fbcd34468edf53ee08e0a76a048c275",
"text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.",
"title": ""
},
{
"docid": "80faeaceefd3851b51feef2e50694ef7",
"text": "The sentiment detection of texts has been witnessed a booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. Till to now, there are mainly four different problems predominating in this research community, namely, subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. In fact, there are inherent relations between them. Subjectivity classification can prevent the sentiment classifier from considering irrelevant or even potentially misleading text. Document sentiment classification and opinion extraction have often involved word sentiment classification techniques. This survey discusses related issues and main approaches to these problems. 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "86bf67085df96877b3409a80f78c4504",
"text": "Well-known met hods for solving the shape-from-shading problem require knowledge of the reflectance map. Here we show how the shape-from-shading problem can be solved when the reflectance map is not available, but is known to have a given form with some unknown parameters. This happens, for example, when the surface is known to be Lambertian, but the direction to the light source is not known. We give an iterative algorithm which alternately estimate* the surface shape and the light source direction. Use of the unit normal in parameterizing the reflectance map, rather than the gradient or stereographic coordinates, simplifies the analysis. Our approach also leads to an iterative scheme for computing shape from shading that adjusts the current estimates of the local normals toward or away from the direction of the light source. The amount of adjustment is proportional to the current difference between the predicted and the observed brightness. We also develop generalizations to less constrained forms of reflectance maps.",
"title": ""
},
{
"docid": "ff91ed2072c93eeae5f254fb3de0d780",
"text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.",
"title": ""
},
{
"docid": "fccbcdff722a297e5a389674d7557a18",
"text": "For the last few decades more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems and can be composed of one or several items. Some comparison or comparative studies were also conducted to identify the best one in different situations. All these issues should be considered while choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications and main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items being evaluated in each questionnaire to indicate those that can identify users’ perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9421-11 and ISO/WD 9241-112) and classical quality ergonomic criteria.",
"title": ""
},
{
"docid": "14839c18d1029270174e9f94d122edd5",
"text": "Nested event structures are a common occurrence in both open domain and domain specific extraction tasks, e.g., a “crime” event can cause a “investigation” event, which can lead to an “arrest” event. However, most current approaches address event extraction with highly local models that extract each event and argument independently. We propose a simple approach for the extraction of such structures by taking the tree of event-argument relations and using it directly as the representation in a reranking dependency parser. This provides a simple framework that captures global properties of both nested and flat event structures. We explore a rich feature space that models both the events to be parsed and context from the original supporting text. Our approach obtains competitive results in the extraction of biomedical events from the BioNLP’09 shared task with a F1 score of 53.5% in development and 48.6% in testing.",
"title": ""
},
{
"docid": "7f20eba09cddb9d980b6475aa089463f",
"text": "This technical note describes a new baseline for the Natural Questions (Kwiatkowski et al., 2019). Our model is based on BERT (Devlin et al., 2018) and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the official NQ leaderboard†. Code, preprocessed data and pretrained model are available‡.",
"title": ""
},
{
"docid": "a651ae33adce719033dad26b641e6086",
"text": "Knowledge base(KB) plays an important role in artificial intelligence. Much effort has been taken to both manually and automatically construct web-scale knowledge bases. Comparing with manually constructed KBs, automatically constructed KB is broader but with more noises. In this paper, we study the problem of improving the quality for automatically constructed web-scale knowledge bases, in particular, lexical taxonomies of isA relationships. We find that these taxonomies usually contain cycles, which are often introduced by incorrect isA relations. Inspired by this observation, we introduce two kinds of models to detect incorrect isA relations from cycles. The first one eliminates cycles by extracting directed acyclic graphs, and the other one eliminates cycles by grouping nodes into different levels. We implement our models on Probase, a state-of-the-art, automatically constructed, web-scale taxonomy. After processing tens of millions of relations, our models eliminate 74 thousand wrong relations with 91% accuracy.",
"title": ""
},
{
"docid": "7b9df4427a6290cf5efda9c41612ad64",
"text": "A systematic design of planar MIMO monopole antennas with significantly reduced mutual coupling is presented, based on the concept of metamaterials. The design is performed by means of individual rectangular loop resonators, placed in the space between the antenna elements. The underlying principle is that resonators act like small metamaterial samples, thus providing an effective means of controlling electromagnetic wave propagation. The proposed design achieves considerably high levels of isolation between antenna elements, without essentially affecting the simplicity and planarity of the MIMO antenna.",
"title": ""
},
{
"docid": "9292d1a97913257cfd1e72645969a988",
"text": "A digital PLL employing an adaptive tracking technique and a novel frequency acquisition scheme achieves a wide tracking range and fast frequency acquisition. The test chip fabricated in a 0.13 mum CMOS process operates from 0.6 GHz to 2 GHz and achieves better than plusmn3200 ppm frequency tracking range when the reference clock is modulated with a 1 MHz sine wave.",
"title": ""
},
{
"docid": "2a9e4ed54dd91eb8a6bad757afc9ac75",
"text": "The modern advancements in digital electronics allow waveforms to be easily synthesized and captured using only digital electronics. The synthesis of radar waveforms using only digital electronics, such as Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs) allows for a majority of the analog chain to be removed from the system. In order to create a constant amplitude waveform, the amplitude distortions must be compensated for. The method chosen to compensate for the amplitude distortions is to pre-distort the waveform so, when it is influenced by the system, the output waveform has a near constant amplitude modulus. The effects of the predistortion were observed to be successful in both range and range-Doppler radar implementations.",
"title": ""
},
{
"docid": "df5612b16477f7e18fdf5ff7f950cce8",
"text": "We have had to wait over 30 years since the naive Bayes model was first introduced in 1960 for the so-called Bayesian network classifiers to resurge. Based on Bayesian networks, these classifiers have many strengths, like model interpretability, accommodation to complex data and classification problem settings, existence of efficient algorithms for learning and classification tasks, and successful applicability in real-world problems. In this article, we survey the whole set of discrete Bayesian network classifiers devised to date, organized in increasing order of structure complexity: naive Bayes, selective naive Bayes, seminaive Bayes, one-dependence Bayesian classifiers, k-dependence Bayesian classifiers, Bayesian network-augmented naive Bayes, Markov blanket-based Bayesian classifier, unrestricted Bayesian classifiers, and Bayesian multinets. Issues of feature subset selection and generative and discriminative structure and parameter learning are also covered.",
"title": ""
},
{
"docid": "0a3ff05dc001e66be2fcd1a71973a8d7",
"text": "Recent advances in evaluating and measuring the perceived visual quality of three-dimensional (3-D) polygonal models are presented in this article, which analyzes the general process of objective quality assessment metrics and subjective user evaluation methods and presents a taxonomy of existing solutions. Simple geometric error computed directly on the 3-D models does not necessarily reflect the perceived visual quality; therefore, integrating perceptual issues for 3-D quality assessment is of great significance. This article discusses existing metrics, including perceptually based ones, computed either on 3-D data or on two-dimensional (2-D) projections, and evaluates their performance for their correlation with existing subjective studies.",
"title": ""
},
{
"docid": "c46dd659aa1dfeac9c58197ff8575278",
"text": "Previous studies indicate that childhood sexual abuse can have extensive and serious consequences. The aim of this research was to do a qualitative study of the consequences of childhood sexual abuse for Icelandic men's health and well-being. Phenomenology was the methodological approach of the study. Totally 14 interviews were conducted, two per individual, and analysed based on the Vancouver School of Phenomenology. The main results of the study showed that the men describe deep and almost unbearable suffering, affecting their entire life, of which there is no alleviation in sight. The men have lived in repressed silence most of their lives and have come close to taking their own lives. What stopped them from committing suicide was revealing to others what happened to them which set them free in a way. The men experienced fear- or rage-based shock at the time of the trauma and most of them endured the attack by dissociation, disconnecting psyche and body and have difficulties reconnecting. They had extremely difficult childhoods, living with indisposition, bullying, learning difficulties and behavioural problems. Some have, from a young age, numbed themselves with alcohol and elicit drugs. They have suffered psychologically and physically and have had relational and sexual intimacy problems. The consequences of the abuse surfaced either immediately after the shock or many years later and developed into complex post-traumatic stress disorder. Because of perceived societal prejudice, it was hard for the men to seek help. This shows the great need for professionals to be alert to the possible consequences of childhood sexual abuse in their practice to reverse the damaging consequences on their health and well-being. We conclude that living in repressed silence after a trauma, like childhood sexual abuse, can be dangerous for the health, well-being and indeed the very life of the survivor.",
"title": ""
},
{
"docid": "7b3db9eb3f441dc258aeb55f94c96e1c",
"text": "Pili annulati is a disorder that produces a spangled appearance to the hair, caused by alternating light and dark banding of hair shafts. This phenomenon is created by abnormal cavities in the cortex of the hair shaft, which produces lighter bands seen on clinical examination. Complications of pili annulati are limited; the most noteworthy complication is increased breakage secondary to weathering of the abnormal hair shafts. We report a case of a 14-year-old adolescent girl with pili annulati and progressive hair loss of 2 months' duration. Most of her hairs were notably short, spangled, and lusterless with light and dark banding observed with handheld magnification. Light microscopy demonstrated alternating light and dark bands, and the dark bands had the typical appearance of air-filled spaces. Gentler hair grooming practices were recommended, and at a follow-up visit, the appearance of the hair had improved with darker and longer shafts. This case should alert clinicians to look for pili annulati when hair fragility is present.",
"title": ""
},
{
"docid": "49445cfa92b95045d23a54eca9f9a592",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. In the past decade, several data mining techniques have been proposed in the literature for predicting the churners using heterogeneous customer records. This paper reviews the different categories of customer data available in open datasets, predictive models and performance metrics used in the literature for churn prediction in telecom industry.",
"title": ""
}
] |
scidocsrr
|
d3ebe627e3c6516be2d8094fc4c19ef2
|
Waste Pickers Perceptions among Households in Cosmo City, South
|
[
{
"docid": "1eba4ab4cb228a476987a5d1b32dda6c",
"text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.",
"title": ""
}
] |
[
{
"docid": "66334ca62a62a78cab72c80b9a19072b",
"text": "End-to-end neural models have made significant progress in question answering, however recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query then finds a relevant answer, and a fine-grain module which scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and selfattention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new stateof-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.",
"title": ""
},
{
"docid": "b6023bc3fdd634ca22b77a561b92fe9e",
"text": "New intelligent power grids (smart grids) will be an essential way of improving efficiency in power supply and power consumption, facilitating the use of distributed and renewable resources on the supply side and providing consumers with a range of tailored services on the consumption side. The delivery of efficiencies and advanced services in a smart grid will require both a comprehensive overlay communications network and flexible software platforms that can process data from a variety of sources, especially electronic sensor networks. Parallel developments in autonomic systems, pervasive computing and context-awareness (relating in particular to data fusion, context modelling, and semantic data) could provide key elements in the development of scalable smart grid data management systems and applications that utilise a multi-technology communications network. This paper describes: (1) the communications and data management requirements of the emerging smart grid, (2) state-of-the-art techniques and systems for context-awareness and (3) a future direction towards devising a context-aware middleware platform for the smart grid, as well as associated requirements and challenges. Smart grids will transform the methods of generating electric power and the monitoring and billing of consumption. The drivers behind the development of smart grids include economic, political and technical elements. Major initiatives have been launched in Europe by the EU Commission [1] and the European Electricity Grid Initiative [2]. In the US, overall policies are set out by the National Science and Technology Council [3] while grid modernisation is specifically described in a report by the GridWise Alliance [4]. The main policy drivers for power grid development are as follows: Promote the integration of distributed renewable power sources (e.g. wind, solar, wave and tidal power, geothermal, biofuel); Provide significant reductions in carbon dioxide (CO 2) emissions through the phasing-out of fossil fuel power plants. This is to help meet agreed world targets in reducing greenhouse gases and combatting climate change; Promote the use of electric vehicles as an alternative to fossil fuelled transport systems; Renew and upgrade older grid transmission infrastructure to provide greater efficiency and security of supply; Introduce two-way ''smart'' metering to facilitate both power saving and power production by consumers; Apart from these policy drivers, power production and distribution will also have to operate in an increasingly deregulated and competitive market environment.",
"title": ""
},
{
"docid": "2e1825c04bd61898ef9266d56eb4e055",
"text": "One of the most important properties of neural nets (NNs) for control purposes is the universal approximation property. Unfortunately,, this property is generally proven for continuous functions. In most real industrial control systems there are nonsmooth functions (e.g., piecewise continuous) for which approximation results in the literature are sparse. Examples include friction, deadzone, backlash, and so on. It is found that attempts to approximate piecewise continuous functions using smooth activation functions require many NN nodes and many training iterations, and still do not yield very good results. Therefore, a novel neural-network structure is given for approximation of piecewise continuous functions of the sort that appear in friction, deadzone, backlash, and other motion control actuator nonlinearities. The novel NN consists of neurons having standard sigmoid activation functions, plus some additional neurons having a special class of nonsmooth activation functions termed \"jump approximation basis function.\" Two types of nonsmooth jump approximation basis functions are determined- a polynomial-like basis and a sigmoid-like basis. This modified NN with additional neurons having \"jump approximation\" activation functions can approximate any piecewise continuous function with discontinuities at a finite number of known points. Applications of the new NN structure are made to rigid-link robotic systems with friction nonlinearities. Friction is a nonlinear effect that can limit the performance of industrial control systems; it occurs in all mechanical systems and therefore is unavoidable in control systems. It can cause tracking errors, limit cycles, and other undesirable effects. Often, inexact friction compensation is used with standard adaptive techniques that require models that are linear in the unknown parameters. It is shown here how a certain class of augmented NN, capable of approximating piecewise continuous functions, can be used for friction compensation.",
"title": ""
},
{
"docid": "b00c6771f355577437dee2cdd63604b8",
"text": "A person gets frustrated when he faces slow speed as many devices are connected to the same network. As the number of people accessing wireless internet increases, it’s going to result in clogged airwaves. Li-Fi is transmission of data through illumination by taking the fiber out of fiber optics by sending data through a LED light bulb that varies in intensity faster than the human eye can follow.",
"title": ""
},
{
"docid": "8284163c893d79213b6573249a0f0a32",
"text": "Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.",
"title": ""
},
{
"docid": "cac081006bb1a7daefe3c62b6c80fe10",
"text": "A novel chaotic time-series prediction method based on support vector machines (SVMs) and echo-state mechanisms is proposed. The basic idea is replacing \"kernel trick\" with \"reservoir trick\" in dealing with nonlinearity, that is, performing linear support vector regression (SVR) in the high-dimension \"reservoir\" state space, and the solution benefits from the advantages from structural risk minimization principle, and we call it support vector echo-state machines (SVESMs). SVESMs belong to a special kind of recurrent neural networks (RNNs) with convex objective function, and their solution is global, optimal, and unique. SVESMs are especially efficient in dealing with real life nonlinear time series, and its generalization ability and robustness are obtained by regularization operator and robust loss function. The method is tested on the benchmark prediction problem of Mackey-Glass time series and applied to some real life time series such as monthly sunspots time series and runoff time series of the Yellow River, and the prediction results are promising",
"title": ""
},
{
"docid": "67731fe25f024540a46b084f42271e70",
"text": "Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision avoiding collision becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and do not have the ability to directly benefit from large datasets and continuous use. In this paper, a dueling architecture based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator even with very noisy depth information predicted from RGB image. Extensive experiments show that D3QN enables twofold acceleration on learning compared with a normal deep Q network and the models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects.",
"title": ""
},
{
"docid": "fd1fbbfba6c1462307cf280a42258ffe",
"text": "In this paper, we propose a selective physical layer network coding (SPNC) scheme in two-way relay channel (TWRC) for binary phase shift keying (BPSK) modulation over Rayleigh fading channels. Physical layer network coding (PNC) shows average throughput gain over traditional relaying in two-way relay systems, but not in all channel conditions PNC performs better than traditional relaying. The key idea of SPNC is to combine the advantage of PNC and traditional relaying in fading channels to improve throughput of the system. We show that in channel states when PNC fails to correctly detect XORed data from superimposed signal at the relay, it is still possible to detect one of the sources' data to retrieve up to half of the achievable throughput by a so called single node detection (SND) scheme. Also, we analyze error performance and throughput gain of SPNC. Simulation results show that SPNC achieves significant throughput gain over PNC in Rayleigh fading channels.",
"title": ""
},
{
"docid": "c5d7d29f4001aca1fbfc6e605e62933d",
"text": "A space efficient and simple circuit for ultra wideband (UWB) balanced pulse generation is presented. The pulse generator uses a single step recovery diode to provide a truly balanced output. The diode biasing is integrated with the switching circuitry to improve the compactness of the design. Two versions of the circuit with lumped and distributed pulse forming networks have been tested. The pulse parameters for distributed pulse shaping network were: rise/fall time (10-90%) 183 ps, pulse width (50-50%) 340 ps, pulse peak to peak voltage 896 mV (12.05 dBm peak power) and for the lumped case: rise time (10-90%) 272 ps, fall time (90-10%) 566 ps pulse width (50-50%) 511 ps, pulse amplitude /spl plusmn/1.6V (17 dBm peak power). In both cases excellent balance of the two pulses at the output ports can be observed. It should be noted that above parameters were obtained with typical inexpensive RF components. The circuit reduces the complexity of the design because of the lack of broadband baluns required for UWB balanced antennas. The circuit may be used as part of a UWB transmitter.",
"title": ""
},
{
"docid": "73e398a5ae434dbd2a10ddccd2cfb813",
"text": "Face alignment aims to estimate the locations of a set of landmarks for a given image. This problem has received much attention as evidenced by the recent advancement in both the methodology and performance. However, most of the existing works neither explicitly handle face images with arbitrary poses, nor perform large-scale experiments on non-frontal and profile face images. In order to address these limitations, this paper proposes a novel face alignment algorithm that estimates both 2D and 3D landmarks and their 2D visibilities for a face image with an arbitrary pose. By integrating a 3D point distribution model, a cascaded coupled-regressor approach is designed to estimate both the camera projection matrix and the 3D landmarks. Furthermore, the 3D model also allows us to automatically estimate the 2D landmark visibilities via surface normal. We use a substantially larger collection of all-pose face images to evaluate our algorithm and demonstrate superior performances than the state-of-the-art methods.",
"title": ""
},
{
"docid": "e643f7f29c2e96639a476abb1b9a38b1",
"text": "Weather forecasting has been one of the most scientifically and technologically challenging problem around the world. Weather data is one of the meteorological data that is rich with important information, which can be used for weather prediction We extract knowledge from weather historical data collected from Indian Meteorological Department (IMD) Pune. From the collected weather data comprising of 36 attributes, only 7 attributes are most relevant to rainfall prediction. We made data preprocessing and data transformation on raw weather data set, so that it shall be possible to work on Bayesian, the data mining, prediction model used for rainfall prediction. The model is trained using the training data set and has been tested for accuracy on available test data. The meteorological centers uses high performance computing and supercomputing power to run weather prediction model. To address the issue of compute intensive rainfall prediction model, we proposed and implemented data intensive model using data mining technique. Our model works with good accuracy and takes moderate compute resources to predict the rainfall. We have used Bayesian approach to prove our model for rainfall prediction, and found to be working well with good accuracy.",
"title": ""
},
{
"docid": "256376e1867ee923ff72d3376c3be918",
"text": "Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly important. In this paper, we propose a unified framework called pixelwise image saliency aggregating (PISA) various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate, and fine-grained saliency, and overcomes the limitations of previous methods, which use homogeneous superpixel based and color only treatment. PISA aggregates multiple saliency cues in a global context, such as complementary color and structure contrast measures, with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation, in which each pixel will be evaluated with multiple hypothetical saliency levels. Instead of using global discrete optimization methods, we employ the cost-volume filtering technique to solve our formulation, assigning the saliency levels smoothly while preserving the edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image subsampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy. Extensive experiments on a number of public data sets suggest that PISA convincingly outperforms other state-of-the-art approaches. In addition, with this work, we also create a new data set containing 800 commodity images for evaluating saliency detection.",
"title": ""
},
{
"docid": "4c3e6abcc0963efe7423fa25e9b231cb",
"text": "In this demo, we present NaLIR, a generic interactive natural language interface for querying relational databases. NaLIR can accept a logically complex English language sentence as query input. This query is first translated into a SQL query, which may include aggregation, nesting, and various types of joins, among other things, and then evaluated against an RDBMS. In this demonstration, we show that NaLIR, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed interactive communication can avoid misinterpretation with minimum user burden.",
"title": ""
},
{
"docid": "d90191953e1d4b90dc2ff49251744221",
"text": "On microblogging services, people usually use hashtags to mark microblogs, which have a specific theme or content, making them easier for users to find. Hence, how to automatically recommend hashtags for microblogs has received much attention in recent years. Previous deep neural network-based hashtag recommendation approaches converted the task into a multiclass classification problem. However, most of these methods only took the microblog itself into consideration. Motivated by the intuition that the history of users should impact the recommendation procedure, in this work, we extend end-to-end memory networks to perform this task. We incorporate the histories of users into the external memory and introduce a hierarchical attention mechanism to select more appropriate histories. To train and evaluate the proposed method, we also construct a dataset based on microblogs collected from Twitter. Experimental results demonstrate that the proposed methods can significantly outperform state-of-the-art methods. By incorporating the hierarchical attention mechanism, the relative improvement in the proposed method over the state-of-the-art method is around 67.9% in the F1-score.",
"title": ""
},
{
"docid": "b1d2de1a59945dcdb05be93d510caaaa",
"text": "This chapter surveys the literature on bubbles, financial crises, and systemic risk. The first part of the chapter provides a brief historical account of bubbles and financial crisis. The second part of the chapter gives a structured overview of the literature on financial bubbles. The third part of the chapter discusses the literatures on financial crises and systemic risk, with particular emphasis on amplification and propagation mechanisms during financial crises, and the measurement of systemic risk. Finally, we point toward some questions for future",
"title": ""
},
{
"docid": "32aaaa1bb43a5631cebb4dd85ef54105",
"text": "In this work sentiment analysis of annual budget for Financial year 2016–17 is done. Text mining is used to extract text data from the budget document and to compute the word association of significant words and their correlation in computed with the associated words. Word frequency and the corresponding word cloud is plotted. The analysis is done in R software. The corresponding sentiment score is computed and analyzed. This analysis is of significant importance keeping in mind the sentiment reflected about the budget in the official budget document.",
"title": ""
},
{
"docid": "f13cbc36f2c51c5735185751ddc2500e",
"text": "This paper presents an overview of the road and traffic sign detection and recognition. It describes the characteristics of the road signs, the requirements and difficulties behind road signs detection and recognition, how to deal with outdoor images, and the different techniques used in the image segmentation based on the colour analysis, shape analysis. It shows also the techniques used for the recognition and classification of the road signs. Although image processing plays a central role in the road signs recognition, especially in colour analysis, but the paper points to many problems regarding the stability of the received information of colours, variations of these colours with respect to the daylight conditions, and absence of a colour model that can led to a good solution. This means that there is a lot of work to be done in the field, and a lot of improvement can be achieved. Neural networks were widely used in the detection and the recognition of the road signs. The majority of the authors used neural networks as a recognizer, and as classifier. Some other techniques such as template matching or classical classifiers were also used. New techniques should be involved to increase the robustness, and to get faster systems for real-time applications.",
"title": ""
},
{
"docid": "cc5746a332cca808cc0e35328eecd993",
"text": "This paper investigates the relationship between corporate social responsibility (CSR) and the economic performance of corporations. It first examines the theories that suggest a relationship between the two. To test these theories, measures of CSR performance and disclosure developed by the New Consumer Group were analysed against the (past, concurrent and subsequent to CSR performance period) economic performance of 56 large UK companies. Economic performance included: financial (return on capital employed, return on equity and gross profit to sales ratios); and capital market performance (systematic risk and excess market valuation). The results supported the conclusion that (past, concurrent and subsequent) economic performance is related to both CSR performance and disclosure. However, the relationships were weak and lacked an overall consistency. For example, past economic performance was found to partly explain variations in firms’ involvement in philanthropic activities. CSR disclosure was affected (positively) by both a firm’s CSR performance and its concurrent financial performance. Involvement in environmental protection activities was found to be negatively correlated with subsequent financial performance. Whereas, a firm’s policies regarding women’s positions seem to be more rewarding in terms of positive capital market responses (performance) in the subsequent period. Donations to the Conservative Party were found not to be related to companies’ (past, concurrent or subsequent) financial and/or capital performance. operation must fall within the guidelines set by society; and • businesses act as moral agents within",
"title": ""
},
{
"docid": "84d8d6ebd899950712003a5567899f75",
"text": "Despite the benefits of information technology for corporations in staff recruitment (reduced time and costs per hire) the increased use also led to glut of applications especially in major enterprises. Therefore the companies forced to find the best candidate in times of a \"War for Talent\" need help to find this needle in a haystack. This help could be provided by recommender systems predominately used in e-commerce to recommend products or services to customers purchasing specific products. Recommender systems could assist the recruiter to find the adequate candidate within the applicant's database. In order to support this search and selection process we conduct a design science approach to integrate recommender systems in a holistic e-recruiting architecture and therewith provide a complete and new solution for IT support in staff recruitment.",
"title": ""
}
] |
scidocsrr
|
4bc9b2a9cedd00cd8dbc1a54e336c86b
|
Exploring the Use of Autoencoders for Botnets Traffic Representation
|
[
{
"docid": "2ffb20d66a0d5cb64442c2707b3155c6",
"text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.",
"title": ""
}
] |
[
{
"docid": "545998c2badee9554045c04983b1d11b",
"text": "This paper presents a new control approach for nonlinear network-induced time delay systems by combining online reset control, neural networks, and dynamic Bayesian networks. We use feedback linearization to construct a nominal control for the system then use reset control and a neural network to compensate for errors due to the time delay. Finally, we obtain a stochastic model of the Networked Control System (NCS) using a Dynamic Bayesian Network (DBN) and use it to design a predictive control. We apply our control methodology to a nonlinear inverted pendulum and evaluate its performance through numerical simulations. We also test our approach with real-time experiments on a dc motor-load NCS with wireless communication implemented using a Ubiquitous Sensor Network (USN). Both the simulation and experimental results demonstrate the efficacy of our control methodology.",
"title": ""
},
{
"docid": "0ecded7fad85b79c4c288659339bc18b",
"text": "We present an end-to-end supervised based system for detecting malware by analyzing network traffic. The proposed method extracts 972 behavioral features across different protocols and network layers, and refers to different observation resolutions (transaction, session, flow and conversation windows). A feature selection method is then used to identify the most meaningful features and to reduce the data dimensionality to a tractable size. Finally, various supervised methods are evaluated to indicate whether traffic in the network is malicious, to attribute it to known malware “families” and to discover new threats. A comparative experimental study using real network traffic from various environments indicates that the proposed system outperforms existing state-of-the-art rule-based systems, such as Snort and Suricata. In particular, our chronological evaluation shows that many unknown malware incidents could have been detected at least a month before their static rules were introduced to either the Snort or Suricata systems.",
"title": ""
},
{
"docid": "c79be5b8b375a9bced1bfe5c3f9024ce",
"text": "Recent technological advances have enabled DNA methylation to be assayed at single-cell resolution. However, current protocols are limited by incomplete CpG coverage and hence methods to predict missing methylation states are critical to enable genome-wide analyses. We report DeepCpG, a computational approach based on deep neural networks to predict methylation states in single cells. We evaluate DeepCpG on single-cell methylation data from five cell types generated using alternative sequencing protocols. DeepCpG yields substantially more accurate predictions than previous methods. Additionally, we show that the model parameters can be interpreted, thereby providing insights into how sequence composition affects methylation variability.",
"title": ""
},
{
"docid": "6fd3f4ab064535d38c01f03c0135826f",
"text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.",
"title": ""
},
{
"docid": "457ba37bf69b870db2653b851d271b0b",
"text": "This paper presents a unified approach to local trajectory planning and control for the autonomous ground vehicle driving along a rough predefined path. In order to cope with the unpredictably changing environment reactively and reason about the global guidance, we develop an efficient sampling-based model predictive local path generation approach to generate a set of kinematically-feasible trajectories aligning with the reference path. A discrete optimization scheme is developed to select the best path based on a specified objective function, then followed by the velocity profile generation. As for the low-level control, to achieve high performance of control, two degree of freedom control architecture is employed by combining the feedforward control with the feedback control. The simulation results demonstrate the capability of the proposed approach to track the curvature-discontinuous reference path robustly, while avoiding collisions with static obstacles.",
"title": ""
},
{
"docid": "57856c122a6f8a0db8423a1af9378b3e",
"text": "Probiotics are defined as live microorganisms, which when administered in adequate amounts, confer a health benefit on the host. Health benefits have mainly been demonstrated for specific probiotic strains of the following genera: Lactobacillus, Bifidobacterium, Saccharomyces, Enterococcus, Streptococcus, Pediococcus, Leuconostoc, Bacillus, Escherichia coli. The human microbiota is getting a lot of attention today and research has already demonstrated that alteration of this microbiota may have far-reaching consequences. One of the possible routes for correcting dysbiosis is by consuming probiotics. The credibility of specific health claims of probiotics and their safety must be established through science-based clinical studies. This overview summarizes the most commonly used probiotic microorganisms and their demonstrated health claims. As probiotic properties have been shown to be strain specific, accurate identification of particular strains is also very important. On the other hand, it is also demonstrated that the use of various probiotics for immunocompromised patients or patients with a leaky gut has also yielded infections, sepsis, fungemia, bacteraemia. Although the vast majority of probiotics that are used today are generally regarded as safe and beneficial for healthy individuals, caution in selecting and monitoring of probiotics for patients is needed and complete consideration of risk-benefit ratio before prescribing is recommended.",
"title": ""
},
{
"docid": "95bbe5d13f3ca5f97d01f2692a9dc77a",
"text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.",
"title": ""
},
{
"docid": "1be6aecdc3200ed70ede2d5e96cb43be",
"text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.",
"title": ""
},
{
"docid": "6646b5b8b4b9946fb58b4570763904d9",
"text": "Nowadays, mobile devices are ubiquitous in people's everyday life and applications on mobile devices are becoming increasingly resource-hungry. However, the resources on mobile devices are limited. Mobile cloud computing addresses the resource scarcity problem of mobile devices by offloading computation and/or data from mobile devices into the cloud. In the converging progress of mobile computing and cloud computing, the cloudlet is an important complement to the client-cloud hierarchy. This paper presents an extensive survey of researches on cloudlet based mobile computing. We first retrospect the evolution of cloudlet based mobile computing. After that, we review the existing works on cloudlet based computation offloading and data offloading. Then we introduce two examples of commercial cloudlet products. At last, we discuss the current situation, the challenges and future directions of this area.",
"title": ""
},
{
"docid": "5a6bfd63fbbe4aea72226c4aa30ac05d",
"text": "Submitted: 1 December 2015 Accepted: 6 April 2016 doi:10.1111/zsc.12190 Sotka, E.E., Bell, T., Hughes, L.E., Lowry, J.K. & Poore, A.G.B. (2016). A molecular phylogeny of marine amphipods in the herbivorous family Ampithoidae. —Zoologica Scripta, 00, 000–000. Ampithoid amphipods dominate invertebrate assemblages associated with shallow-water macroalgae and seagrasses worldwide and represent the most species-rich family of herbivorous amphipod known. To generate the first molecular phylogeny of this family, we sequenced 35 species from 10 genera at two mitochondrial genes [the cytochrome c oxidase subunit I (COI) and the large subunit of 16 s (LSU)] and two nuclear loci [sodium–potassium ATPase (NAK) and elongation factor 1-alpha (EF1)], for a total of 1453 base pairs. All 10 genera are embedded within an apparently monophyletic Ampithoidae (Amphitholina, Ampithoe, Biancolina, Cymadusa, Exampithoe, Paragrubia, Peramphithoe, Pleonexes, Plumithoe, Pseudoamphithoides and Sunamphitoe). Biancolina was previously placed within its own superfamily in another suborder. Within the family, single-locus trees were generally poor at resolving relationships among genera. Combined-locus trees were better at resolving deeper nodes, but complete resolution will require greater taxon sampling of ampithoids and closely related outgroup species, and more molecular characters. Despite these difficulties, our data generally support the monophyly of Ampithoidae, novel evolutionary relationships among genera, several currently accepted genera that will require revisions via alpha taxonomy and the presence of cryptic species. Corresponding author: Erik Sotka, Department of Biology and the College of Charleston Marine Laboratory, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mail: SotkaE@cofc.edu Erik E. Sotka, and Tina Bell, Department of Biology and Grice Marine Laboratory, College of Charleston, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mails: SotkaE@cofc.edu, tinamariebell@gmail.com Lauren E. Hughes, and James K. Lowry, Australian Museum Research Institute, 6 College Street, Sydney, NSW 2010, Australia. E-mails: megaluropus@gmail.com, stephonyx@gmail.com Alistair G. B. Poore, Evolution & Ecology Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: a.poore@unsw.edu.au",
"title": ""
},
{
"docid": "261ef8b449727b615f8cd5bd458afa91",
"text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophelia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophelic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophelia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.",
"title": ""
},
{
"docid": "23df6d913ffcdeda3de8b37977866bb7",
"text": "This paper examined the impact of customer relationship management (CRM) elements on customer satisfaction and loyalty. CRM is one of the critical strategies that can be employed by organizations to improve competitive advantage. Four critical CRM elements are measured in this study are behavior of the employees, quality of customer services, relationship development and interaction management. The study was performed at a departmental store in Tehran, Iran. The study employed quantitative approach and base on 300 respondents. Multiple regression analysis is used to examine the relationship of the variables. The finding shows that behavior of the employees is significantly relate and contribute to customer satisfaction and loyalty.",
"title": ""
},
{
"docid": "b311ce7a34d3bdb21678ed765bcd0f0b",
"text": "This paper focuses on the micro-blogging service Twitter, looking at source credibility for information shared in relation to the Fukushima Daiichi nuclear power plant disaster in Japan. We look at the sources, credibility, and between-language differences in information shared in the month following the disaster. Messages were categorized by user, location, language, type, and credibility of information source. Tweets with reference to third-party information made up the bulk of messages sent, and it was also found that a majority of those sources were highly credible, including established institutions, traditional media outlets, and highly credible individuals. In general, profile anonymity proved to be correlated with a higher propensity to share information from low credibility sources. However, Japanese-language tweeters, while more likely to have anonymous profiles, referenced lowcredibility sources less often than non-Japanese tweeters, suggesting proximity to the disaster mediating the degree of credibility of shared content.",
"title": ""
},
{
"docid": "96f42b3a653964cffa15d9b3bebf0086",
"text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. 1 ar X iv :1 41 1. 02 47 v1 [ qbi o. N C ] 2 N ov 2 01 4 Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W T , the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W T . In fact, any matrix B will suffice, so long as on average,",
"title": ""
},
{
"docid": "3417c9c1de65c18fe82dc8cdf46335a4",
"text": "Romualdo Pastor-Satorras, Claudio Castellano, 3 Piet Van Mieghem, and Alessandro Vespignani 6 Departament de F́ısica i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Campus Nord B4, 08034 Barcelona, Spain Istituto dei Sistemi Complessi (ISC-CNR), via dei Taurini 19, I-00185 Roma, Italy Dipartimento di Fisica, “Sapienza” Università di Roma, P.le A. Moro 2, I-00185 Roma, Italy Delft University of Technology, Delft, The Netherlands Laboratory for the Modeling of Biological and Socio-technical Systems, Northeastern University, Boston MA 02115 USA Institute for Scientific Interchange Foundation, Turin 10133, Italy",
"title": ""
},
{
"docid": "d685e84f8ddc55f2391a9feffc88889f",
"text": "Little is known about how Agile developers and UX designers integrate their work on a day-to-day basis. While accounts in the literature attempt to integrate Agile development and UX design by combining their processes and tools, the contradicting claims found in the accounts complicate extracting advice from such accounts. This paper reports on three ethnographically-informed field studies of the day-today practice of developers and designers in organisational settings. Our results show that integration is achieved in practice through (1) mutual awareness, (2) expectations about acceptable behaviour, (3) negotiating progress and (4) engaging with each other. Successful integration relies on practices that support and maintain these four aspects in the day-to-day work of developers and designers.",
"title": ""
},
{
"docid": "07e93064b1971a32b5c85b251f207348",
"text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.",
"title": ""
},
{
"docid": "72a86b52797d61bf631d75cd7109e9d9",
"text": "We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus’ open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.",
"title": ""
},
{
"docid": "e56af4a3a8fbef80493d77b441ee1970",
"text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.",
"title": ""
},
{
"docid": "5e360af9f3fa234afe9d2f71d04cc64c",
"text": "Personality is an important psychological construct accounting for individual differences in people. To reliably, validly, and efficiently recognize an individual’s personality is a worthwhile goal; however, the traditional ways of personality assessment through self-report inventories or interviews conducted by psychologists are costly and less practical in social media domains, since they need the subjects to take active actions to cooperate. This paper proposes a method of big five personality recognition (PR) from microblog in Chinese language environments with a new machine learning paradigm named label distribution learning (LDL), which has never been previously reported to be used in PR. One hundred and thirteen features are extracted from 994 active Sina Weibo users’ profiles and micro-blogs. Eight LDL algorithms and nine non-trivial conventional machine learning algorithms are adopted to train the big five personality traits prediction models. Experimental results show that two of the proposed LDL approaches outperform the others in predictive ability, and the most predictive one also achieves relatively higher running efficiency among all the algorithms.",
"title": ""
}
] |
scidocsrr
|
8cf64d30e864cab57eaddbad7923891a
|
Health-ATM: A Deep Architecture for Multifaceted Patient Health Record Representation and Risk Prediction
|
[
{
"docid": "897a6d208785b144b5d59e4f346134cd",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "d3fa8a6b4cd436b16d98166e2c4c230d",
"text": "Inferring phenotypic patterns from population-scale clinical data is a core computational task in the development of personalized medicine. One important source of data on which to conduct this type of research is patient Electronic Medical Records (EMR). However, the patient EMRs are typically sparse and noisy, which creates significant challenges if we use them directly to represent patient phenotypes. In this paper, we propose a data driven phenotyping framework called Pacifier (PAtient reCord densIFIER), where we interpret the longitudinal EMR data of each patient as a sparse matrix with a feature dimension and a time dimension, and derive more robust patient phenotypes by exploring the latent structure of those matrices. Specifically, we assume that each derived phenotype is composed of a subset of the medical features contained in original patient EMR, whose value evolves smoothly over time. We propose two formulations to achieve such goal. One is Individual Basis Approach (IBA), which assumes the phenotypes are different for every patient. The other is Shared Basis Approach (SBA), which assumes the patient population shares a common set of phenotypes. We develop an efficient optimization algorithm that is capable of resolving both problems efficiently. Finally we validate Pacifier on two real world EMR cohorts for the tasks of early prediction of Congestive Heart Failure (CHF) and End Stage Renal Disease (ESRD). Our results show that the predictive performance in both tasks can be improved significantly by the proposed algorithms (average AUC score improved from 0.689 to 0.816 on CHF, and from 0.756 to 0.838 on ESRD respectively, on diagnosis group granularity). We also illustrate some interesting phenotypes derived from our data.",
"title": ""
}
] |
[
{
"docid": "4044d493ac6c38fcb590a7fa5ced84d9",
"text": "Use of sub-design-rule (SDR) thick-gate-oxide MOS structures can significantly improve RF performance. Utilizing 3-stack 3.3-V MOSFET's with an SDR channel length, a 31.3-dBm 900-MHz Bulk CMOS T/R switch with transmit (TX) and receive (RX) insertion losses of 0.5 and 1.0 dB is realized. A 28-dBm 2.4-GHz T/R switch with TX and RX insertion losses of 0.8 and 1.2 dB is also demonstrated. SDR MOS varactors achieve Qmin of ~ 80 at 24 GHz with a tuning range of ~ 40%.",
"title": ""
},
{
"docid": "306a833c0130678e1b2ece7e8b824d5e",
"text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.",
"title": ""
},
{
"docid": "a281ab54ac5b5ff85d09b773429291d3",
"text": "This article elaborates on the competencies, often referred to as 21st century competencies, that are needed to be able to live in and contribute to our current (and future) society. We begin by describing, analysing and reflecting on international frameworks describing 21st century competencies, giving special attention to digital literacy as one of the core competencies for the 21st century. This is followed by an analysis of the learning approaches that are considered appropriate for acquiring 21st century competencies, and the specific role of technology in these learning processes. Despite some consensus about what 21st century competencies are and how they can be acquired, results from international studies indicate that teaching strategies for 21st century competencies are often not well implemented in actual educational practice. The reasons for this include a lack of integration of 21st century competencies in curriculum and assessment, insufficient preparation of teachers and the absence of any systematic attention for strategies to adopt at scale innovative teaching and learning practices. The article concludes with a range of specific recommendations for the implementation of 21st century competencies.",
"title": ""
},
{
"docid": "874ad221d7ea2fc9fdc368b814e7f4de",
"text": "Tail labels in the multi-label learning problem undermine the low-rank assumption. Nevertheless, this problem has rarely been investigated. In addition to using the low-rank structure to depict label correlations, this paper explores and exploits an additional sparse component to handle tail labels behaving as outliers, in order to make the classical low-rank principle in multi-label learning valid. The divide-and-conquer optimization technique is employed to increase the scalability of the proposed algorithm while theoretically guaranteeing its performance. A theoretical analysis of the generalizability of the proposed algorithm suggests that it can be improved by the low-rank and sparse decomposition given tail labels. Experimental results on real-world data demonstrate the significance of investigating tail labels and the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "7a86d9e19930ce5af78431a52bb75728",
"text": "Mapping Relational Databases (RDB) to RDF is an active field of research. The majority of data on the current Web is stored in RDBs. Therefore, bridging the conceptual gap between the relational model and RDF is needed to make the data available on the Semantic Web. In addition, recent research has shown that Semantic Web technologies are useful beyond the Web, especially if data from different sources has to be exchanged or integrated. Many mapping languages and approaches were explored leading to the ongoing standardization effort of the World Wide Web Consortium (W3C) carried out in the RDB2RDF Working Group (WG). The goal and contribution of this paper is to provide a feature-based comparison of the state-of-the-art RDB-to-RDF mapping languages. It should act as a guide in selecting a RDB-to-RDF mapping language for a given application scenario and its requirements w.r.t. mapping features. Our comparison framework is based on use cases and requirements for mapping RDBs to RDF as identified by the RDB2RDF WG. We apply this comparison framework to the state-of-the-art RDB-to-RDF mapping languages and report the findings in this paper. As a result, our classification proposes four categories of mapping languages: direct mapping, read-only general-purpose mapping, read-write general-purpose mapping, and special-purpose mapping. We further provide recommendations for selecting a mapping language.",
"title": ""
},
{
"docid": "0ac0f9965376f5547a2dabd3d06b6b96",
"text": "A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.",
"title": ""
},
{
"docid": "12130736941091e54f42c82fffb5f0c0",
"text": "This paper introduces an ontology-based framework to improve the preparation of ISO/IEC 27001 audits, and to strengthen the security state of the company respectively. Building on extensive previous work on security ontologies, we elaborate on how ISO/IEC 27001 artifacts can be integrated into this ontology. A basic introduction to security ontologies is given first. Specific examples show how certain ISO/IEC 27001 requirements are to be integrated into the ontology; moreover, our rule-based engine is used to query the knowledge base to check whether specific security requirements are fulfilled. The aim of this paper is to explain how security ontologies can be used for a tool to support the ISO/IEC 27001 certification, providing pivotal information for the preparation of audits and the creation and maintenance of security guidelines and policies.",
"title": ""
},
{
"docid": "8741e414199ecfbbf4a4c16d8a303ab5",
"text": "In ophthalmic artery occlusion by hyaluronic acid injection, the globe may get worse by direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential for restoration of retinal perfusion and minimizing the risk of phthisis bulbi. The study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was curvedly incised to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100 % and 90 % of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform compared with the supraorbital approach due to a trend toward a short procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortious course, easily retrieved and cannulated, with a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route implication for hyaluronidase injection into the ophthalmic artery to salvage central retinal occlusion following hyaluronic acid injection. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "281f23c51d3ba27e09e3109c8578c385",
"text": "Generative Adversarial Networks (GANs) are an incredibly exciting approach for efficiently training computers to learn many features in data, as well as to generate realistic novel samples. Thanks to a number of their unique characteristics, some experts believe they may reinvent machine learning. In this thesis I explore the state of the GAN, focusing on the mechanisms by which they work, the fundamental challenges and strategies associated with training them, a selection of their various extensions, and what they may have to offer to the the greater machine learning community. I also consider the broader idea of building machine learning systems comprised of multiple neural networks, as opposed to using a single network. Using the state of the art progressive growing of GANs approach, I conducted experiments where I generated painting-like images that I believe to be the most authentic GAN-generated portrait paintings. I also generated highly realistic chest X-ray images, using a progressively grown GAN trained without labels on the NIH’s ChestX-ray14 dataset, which contains 112,000 chest X-ray images with 14 different disease diagnoses represented; it still remains to be seen whether the GAN-generated X-ray images contain clear identifying features of the various diseases. My generated results further demonstrate the relatively stable training of the progressive growing approach as well as the GAN’s compelling capacity for learning features in a variety of forms of image data.",
"title": ""
},
{
"docid": "a45ac7298f57a1be7bf5a968a3d4f10b",
"text": "Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network’s input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization. To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network’s Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.",
"title": ""
},
{
"docid": "52b1c306355e6bf8ba10ea7e3cf1d05e",
"text": "QUESTION\nIs there a means of assessing research impact beyond citation analysis?\n\n\nSETTING\nThe case study took place at the Washington University School of Medicine Becker Medical Library.\n\n\nMETHOD\nThis case study analyzed the research study process to identify indicators beyond citation count that demonstrate research impact.\n\n\nMAIN RESULTS\nThe authors discovered a number of indicators that can be documented for assessment of research impact, as well as resources to locate evidence of impact. As a result of the project, the authors developed a model for assessment of research impact, the Becker Medical Library Model for Assessment of Research.\n\n\nCONCLUSION\nAssessment of research impact using traditional citation analysis alone is not a sufficient tool for assessing the impact of research findings, and it is not predictive of subsequent clinical applications resulting in meaningful health outcomes. The Becker Model can be used by both researchers and librarians to document research impact to supplement citation analysis.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "42a412b11300ec8d7721c1f532dadfb9",
"text": " Most data-driven dependency parsing approaches assume that sentence structure is represented as trees. Although trees have several desirable properties from both computational and linguistic perspectives, the structure of linguistic phenomena that goes beyond shallow syntax often cannot be fully captured by tree representations. We present a parsing approach that is nearly as simple as current data-driven transition-based dependency parsing frameworks, but outputs directed acyclic graphs (DAGs). We demonstrate the benefits of DAG parsing in two experiments where its advantages over dependency tree parsing can be clearly observed: predicate-argument analysis of English and syntactic analysis of Danish with a representation that includes long-distance dependencies and anaphoric reference links.",
"title": ""
},
{
"docid": "ea49d288ffefd549f77519c90de51fbc",
"text": "Text line detection is a prerequisite procedure of mathematical formula recognition, however, many incorrectly segmented text lines are often produced due to the two-dimensional structures of mathematics when using existing segmentation methods such as Projection Profiles Cutting or white space analysis. In consequence, mathematical formula recognition is adversely affected by these incorrectly detected text lines, with errors propagating through further processes. Aimed at mathematical formula recognition, we propose a text line detection method to produce reliable line segmentation. Based on the results produced by PPC, a learning based merging strategy is presented to combine incorrectly split text lines. In the merging strategy, the features of layout and text for a text line and those between successive lines are utilised to detect the incorrectly split text lines. Experimental results show that the proposed approach obtains good performance in detecting text lines from mathematical documents. Furthermore, the error rate in mathematical formula identification is reduced significantly through adopting the proposed text line detection method.",
"title": ""
},
{
"docid": "2fbd1b2e25473affb40990195b26a88b",
"text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.",
"title": ""
},
{
"docid": "169258ee8696b481aac76fcee488632c",
"text": "Three parkinsonian patients are described who independently discovered that their gait was facilitated by inverting a walking stick and using the handle, carried a few inches from the ground, as a visual cue or target to step over and initiate walking. It is suggested that the \"inverted\" walking stick have wider application in patients with Parkinson's disease as an aid to walking, particularly if they have difficulty with step initiation and maintenance of stride length.",
"title": ""
},
{
"docid": "974800093c29c5484abd6644ae330555",
"text": "In this paper, we investigate the gender gap in education in rural northwest China. We first discuss parental perceptions of abilities and appropriate roles for girls and boys; parental concerns about old-age support; and parental perceptions of different labor market outcomes for girls' and boys' education. We then investigate gender disparities in investments in children, children's performance at school, and children's subsequent attainment. We analyze a survey of 9-12-year-old children and their families conducted in rural Gansu Province in the year 2000, along with follow-up information about subsequent educational attainment collected 7 years later. We complement our main analysis with two illustrative case studies of rural families drawn from 11 months of fieldwork conducted in rural Gansu between 2003 and 2005 by the second author.In 2000, most mothers expressed egalitarian views about girls' and boys' rights and abilities, in the abstract. However, the vast majority of mothers still expected to rely on sons for old-age support, and nearly one in five mothers interviewed agreed with the traditional saying, \"Sending girls to school is useless since they will get married and leave home.\" Compared to boys, girls faced somewhat lower (though still very high) maternal educational expectations and a greater likelihood of being called on for household chores than boys. However, there was little evidence of a gender gap in economic investments in education. Girls rivaled or outperformed boys in academic performance and engagement. Seven years later, boys had attained just about a third of a year more schooling than girls-a quite modest advantage that could not be fully explained by early parental attitudes and investments, or student performance or engagement. Fieldwork confirmed that parents of sons and daughters tended to have high aspirations for their children. Parents sometimes viewed boys as having greater aptitude, but tended to view girls as having more dedication-an attribute parents perceived as being critical for educational success. Findings suggest that at least in Gansu, rural parental educational attitudes and practices toward boys and girls are more complicated and less uniformly negative for girls than commonly portrayed.",
"title": ""
},
{
"docid": "a532dcd3dbaf3ba784d1f5f8623b600c",
"text": "Our long term interest is in building inference algorithms capable of answering questions and producing human-readable explanations by aggregating information from multiple sources and knowledge bases. Currently information aggregation (also referred to as “multi-hop inference”) is challenging for more than two facts due to “semantic drift”, or the tendency for natural language inference algorithms to quickly move off-topic when assembling long chains of knowledge. In this paper we explore the possibility of generating large explanations with an average of six facts by automatically extracting common explanatory patterns from a corpus of manually authored elementary science explanations represented as lexically-connected explanation graphs grounded in a semi-structured knowledge base of tables. We empirically demonstrate that there are sufficient common explanatory patterns in this corpus that it is possible in principle to reconstruct unseen explanation graphs by merging multiple explanatory patterns, then adapting and/or adding to their knowledge. This may ultimately provide a mechanism to allow inference algorithms to surpass the two-fact “aggregation horizon” in practice by using common explanatory patterns as constraints to limit the search space during information aggregation.",
"title": ""
},
{
"docid": "cae9e77074db114690a6ed1330d9b14c",
"text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.",
"title": ""
},
{
"docid": "ebf06033624b52607ed767019fdfd1c8",
"text": "In Taiwan elementary schools, Scratch programming has been taught for more than four years. Previous studies have shown that personal annotations is a useful learning method that improve learning performance. An annotation-based Scratch programming (ASP) system provides for the creation, share, and review of annotations and homework solutions in the interface of Scratch programming. In addition, we combine the ASP system with the problem solving-based teaching approach in Scratch programming pedagogy, which boosts cognition development and enhances learning achievements. This study is aimed at exploring the effects of annotations and homework on learning achievement. A quasi-experimental method was used with elementary school students in a Scratch programming course over a 4-month period. The experimental results revealed that students’ thoughts and solutions in solving homework assignments have a significant influence on learning achievement. We further investigated that only making annotations in solving homework activities, among all other variables (the quantity of annotations, the quantity of one’s own annotations reviewed, the quantity of peers’ annotations reviewed, the quantity of one’s own homework solutions reviewed, and the quantity of peers’ homework solutions reviewed), can significantly predict learning achievements.",
"title": ""
}
] |
scidocsrr
|
e1dfeaf110c1db0dab7e707242321c22
|
Multi-Sense Embeddings Through a Word Sense Disambiguation Process
|
[
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "d2a1ecb8ad28ed5ba75460827341f741",
"text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense.1 The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) highquality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms stateof-the-art supervised methods on domainspecific WSD, and achieves competitive performance on coarse-grained all-words WSD.",
"title": ""
}
] |
[
{
"docid": "f89e14124700ee75fc2499b4b418cc49",
"text": "This study examines cultural differences and similarities in design of university Web sites using Hofstede’s model of cultural dimensions. Graphical elements on a sample of university home pages from Malaysia, Austria, the United States, Ecuador, Japan, Sweden, Greece and Denmark are compared using content analysis methods. The home pages were analyzed on the basis of two criteria: organization and graphical design. Element frequency scores were correlated with Hofstede’s indexes and interpreted on the basis of the existing literature. The results suggest that similarities and differences in Web site design can be brought out through Hofstede’s cultural model. Computed correlations between Hofstede’s scores and frequency counts of interface elements were weaker than anticipated, but in most cases occurred in the hypothesized direction.",
"title": ""
},
{
"docid": "19efc1557ccf9cc6a270a85e7afa6bc8",
"text": "This paper looks into a new direction in video content analysis - the representation and modeling of affective video content . The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both are referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sport program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, the affective content analysis is also likely to contribute to enhancing the quality of personalizing the video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect that is known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space that is characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space by using the models that link the arousal and valence dimensions to low-level features extracted from video data. This results in the arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of expected transitions from one feeling to another along a video, as perceived by a viewer.",
"title": ""
},
{
"docid": "b85a6286ca2fb14a9255c9d70c677de3",
"text": "0140-3664/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.comcom.2013.01.009 q The research leading to these results has been conducted in the SAIL project and received funding from the European Community’s Seventh Framework Program (FP7/2007-2013) under Grant Agreement No. 257448. ⇑ Corresponding author. Tel.: +49 5251 60 5385; fax: +49 5251 60 5377. E-mail addresses: cdannewitz@upb.de (C. Dannewitz), Dirk.Kutscher@neclab.eu (D. Kutscher), Borje.Ohlman@ericsson.com (B. Ohlman), stephen.farrell@cs.tcd.ie (S. Farrell), bengta@sics.se (B. Ahlgren), hkarl@upb.de (H. Karl). 1 <http://www.cisco.com/web/solutions/sp/vni/vni_mobile_forecast_highlights/ index.html>. Christian Dannewitz , Dirk Kutscher b,⇑, Börje Ohlman , Stephen Farrell , Bengt Ahlgren , Holger Karl a",
"title": ""
},
{
"docid": "357ae5590fb6f11fbd210baced2fc4ee",
"text": "To achieve the best results from an OCR system, the pre-processing steps must be performed with a high degree of accuracy and reliability. There are two critically important steps in the OCR pre-processing phase. First, blocks must be extracted from each page of the scanned document. Secondly, all blocks resulting from the first step must be arranged in the correct order. One of the most notable techniques for block ordering in the second step is the recursive x-y cut (RXYC) algorithm. This technique works accurately only when applied to documents with a simple page layout but it causes incorrect block ordering when applied to documents with complex page layouts. This paper proposes a modified recursive x-y cut algorithm for solving block ordering problems for documents with complex page layouts. This proposed algorithm can solve problems such as (1) the overlapping block problem; (2) the blocks overlay problem, and (3) the L-Shaped block problem.",
"title": ""
},
{
"docid": "9d97803a016e24fc9a742d45adf1cc3a",
"text": "Biochemical compositional analysis of microbial biomass is a useful tool that can provide insight into the behaviour of an organism and its adaptational response to changes in its environment. To some extent, it reflects the physiological and metabolic status of the organism. Conventional methods to estimate biochemical composition often employ different sample pretreatment strategies and analytical steps for analysing each major component, such as total proteins, carbohydrates, and lipids, making it labour-, time- and sample-intensive. Such analyses when carried out individually can also result in uncertainties of estimates as different pre-treatment or extraction conditions are employed for each of the component estimations and these are not necessarily standardised for the organism, resulting in observations that are not easy to compare within the experimental set-up or between laboratories. We recently reported a method to estimate total lipids in microalgae (Chen, Vaidyanathan, Anal. Chim. Acta, 724, 67-72). Here, we propose a unified method for the simultaneous estimation of the principal biological components, proteins, carbohydrates, lipids, chlorophyll and carotenoids, in a single microalgae culture sample that incorporates the earlier published lipid assay. The proposed methodology adopts an alternative strategy for pigment assay that has a high sensitivity. The unified assay is shown to conserve sample (by 79%), time (67%), chemicals (34%) and energy (58%) when compared to the corresponding assay for each component, carried out individually on different samples. The method can also be applied to other microorganisms, especially those with recalcitrant cell walls.",
"title": ""
},
{
"docid": "0bf227d17e76d1fb16868ff90d75e94c",
"text": "The high-efficiency current-mode (CM) and voltage-mode (VM) Class-E power amplifiers (PAs) for MHz wireless power transfer (WPT) systems are first proposed in this paper and the design methodology for them is presented. The CM/VM Class-E PA is able to deliver the increasing/decreasing power with the increasing load and the efficiency maintains high even when the load varies in a wide range. The high efficiency and certain operation mode are realized by introducing an impedance transformation network with fixed components. The efficiency, output power, circuit tolerance, and robustness are all taken into consideration in the design procedure, which makes the CM and the VM Class-E PAs especially practical and efficient to real WPT systems. 6.78-MHz WPT systems with the CM and the VM Class-E PAs are fabricated and compared to that with the classical Class-E PA. The measurement results show that the output power is proportional to the load for the CM Class-E PA and is inversely proportional to the load for the VM Class-E PA. The efficiency for them maintains high, over 83%, when the load of PA varies from 10 to 100 $\\Omega$, while the efficiency of the classical Class-E is about 60% in the worst case. The experiment results validate the feasibility of the proposed design methodology and show that the CM and the VM Class-E PAs present superior performance in WPT systems compared to the traditional Class-E PA.",
"title": ""
},
{
"docid": "66b2ca04ed0b1435d525f04cd81969ac",
"text": "Over the past couple of decades, trends in both microarchitecture and underlying semiconductor technology have significantly reduced microprocessor clock periods. These trends have significantly increased relative main-memory latencies as measured in processor clock cycles. To avoid large performance losses caused by long memory access delays, microprocessors rely heavily on a hierarchy of cache memories. But cache memories are not always effective, either because they are not large enough to hold a program's working set, or because memory access patterns don't exhibit behavior that matches a cache memory's demand-driven, line-structured organization. To partially overcome cache memories' limitations, we organize data cache prefetch information in a new way, a GHB (global history buffer) supports existing prefetch algorithms more effectively than conventional prefetch tables. It reduces stale table data, improving accuracy and reducing memory traffic. It contains a more complete picture of cache miss history and is smaller than conventional tables.",
"title": ""
},
{
"docid": "d3ccd01e5efacc8772a01768e09b48e0",
"text": "We introduce the Wordometer, a novel method to estimate the number of words a user reads using a mobile eye tracker and document image retrieval. We present a reading detection algorithm which works with over 91 % accuracy over 10 test subjects using 10-fold cross validation. We implement two algorithms to estimate the read words using a line break detector. A simple version gives an average error rate of 13,5 % for 9 users over 10 documents. A more sophisticated word count algorithm based on support vector regression with an RBF kernel reaches an average error rate from only 8.2 % (6.5 % if one test subject with abnormal behavior is excluded). The achieved error rates are comparable to pedometers that count our steps in our daily life. Thus, we believe the Wordometer can be used as a step counter for the information we read to make our knowledge life healthier.",
"title": ""
},
{
"docid": "c049c79253bd9575774c60b459af4505",
"text": "Ginkgo has been a mainstay of traditional Chinese medicine for more than 5000 years. Perhaps the ancient Taoist Monks had some vision of the future of Ginkgo as a brain and memory tonic when they planted it ceremonially in places of honor in their monasteries. They felt that this two lobed, fan-shaped leaf (biloba) represented the two phases of Yin and Yang in Taoist Philosophy. Ginkgo was planted to portray wisdom, centeredness and a meditative state.",
"title": ""
},
{
"docid": "479b124662755d8b07f2f5f9baabef9a",
"text": "The ARINC 653 specification defines the functionality that an operating system (OS) must guarantee to enforce robust spatial and temporal partitioning as well as an avionics application programming interface for the system. The standard application interface - the ARINC 653 application executive (APEX) - is defined as a set of software services a compliant OS must provide to avionics application developers. The ARINC 653 specification defines the interfaces and the behavior of the APEX but leaves implementation details to OS vendors. This paper describes an OS independent design approach of a portable APEX interface. POSIX, as a programming interface available on a wide range of modern OS, will be used to implement the APEX layer. This way the standardization of the APEX is taken a step further: not only the definition of services is standardized but also its interface to the underlying OS. Therefore, the APEX operation does not depend on a particular OS but relies on a well defined set of standardized components.",
"title": ""
},
{
"docid": "9a52f85acb0c322c47921a33f530c7c2",
"text": "The purpose of this article is to explore recent research into World Englishes (henceforth WEs) and English as a Lingua Franca (ELF),1 focusing on its implications for TESOL, and the extent to which it is being taken into account by English language teachers, linguists, and second language acquisition researchers. After a brief introduction comparing the current situation with that of 15 years ago, I look more closely at definitions of WEs and ELF. Then follows an overview of relevant developments in WEs and ELF research during the past 15 years, along with a more detailed discussion of some key research projects and any controversies they have aroused. I then address the implications of WEs/ELF research for TESOL vis-à-vis English language standards and standard English, and the longstanding native versus nonnative teacher debate. Finally, I assess the consensus on WEs and ELF that is emerging both among researchers and between researchers and language teaching professionals. The article concludes by raising a number of questions that remain to be investigated in future research.",
"title": ""
},
{
"docid": "82d6b67170b3245b422428751d148aac",
"text": "Disclosures about new financial frauds and scandals are continually appearing in the press. As a consequence, the accounting profession's traditional methods of monitoring corporate financial activities are under intense scrutiny. At the same time, there is recognition that principles-based GAAP from the International Accounting Standards Board will become the recognized standard in the U.S. The authors argue that these two factors will change the practices used to fight corporate malfeasance as investigators adapt the techniques of accounting into a forensic audit engagement model.",
"title": ""
},
{
"docid": "77a0234ae555075aebd10b0d9926484f",
"text": "The antibacterial effect of visible light irradiation combined with photosensitizers has been reported. The objective of this was to test the effect of visible light irradiation without photosensitizers on the viability of oral microorganisms. Strains of Porphyromonas gingivalis, Fusobacterium nucleatum, Streptococcus mutans and Streptococcus faecalis in suspension or grown on agar were exposed to visible light at wavelengths of 400-500 nm. These wavelengths are used to photopolymerize composite resins widely used for dental restoration. Three photocuring light sources, quartz-tungsten-halogen lamp, light-emitting diode and plasma-arc, at power densities between 260 and 1300 mW/cm2 were used for up to 3 min. Bacterial samples were also exposed to a near-infrared diode laser (wavelength, 830 nm), using identical irradiation parameters for comparison. The results show that blue light sources exert a phototoxic effect on P. gingivalis and F. nucleatum. The minimal inhibitory dose for P. gingivalis and F. nucleatum was 16-62 J/cm2, a value significantly lower than that for S. mutans and S. faecalis (159-212 J/cm2). Near-infrared diode laser irradiation did not affect any of the bacteria tested. Our results suggest that visible light sources without exogenous photosensitizers have a phototoxic effect mainly on Gram-negative periodontal pathogens.",
"title": ""
},
{
"docid": "04d286949838098a480e532001117013",
"text": "We propose Stegobot, a new generation botnet that communicates over probabilistically unobservable communication channels. It is designed to spread via social malware attacks and steal information from its victims. Unlike conventional botnets, Stegobot traffic does not introduce new communication endpoints between bots. Instead, it is based on a model of covert communication over a social-network overlay – bot to botmaster communication takes place along the edges of a social network. Further, bots use image steganography to hide the presence of communication within image sharing behavior of user interaction. We show that it is possible to design such a botnet even with a less than optimal routing mechanism such as restricted flooding. We analyzed a real-world dataset of image sharing between members of an online social network. Analysis of Stegobot’s network throughput indicates that stealthy as it is, it is also functionally powerful – capable of channeling fair quantities of sensitive data from its victims to the botmaster at tens of megabytes every month",
"title": ""
},
{
"docid": "0a1925251cac8d15da9bbc90627c28dc",
"text": "The Madden–Julian oscillation (MJO) is the dominant mode of tropical atmospheric intraseasonal variability and a primary source of predictability for global sub-seasonal prediction. Understanding the origin and perpetuation of the MJO has eluded scientists for decades. The present paper starts with a brief review of progresses in theoretical studies of the MJO and a discussion of the essential MJO characteristics that a theory should explain. A general theoretical model framework is then described in an attempt to integrate the major existing theoretical models: the frictionally coupled Kelvin–Rossby wave, the moisture mode, the frictionally coupled dynamic moisture mode, the MJO skeleton, and the gravity wave interference, which are shown to be special cases of the general MJO model. The last part of the present paper focuses on a special form of trio-interaction theory in terms of the general model with a simplified Betts–Miller (B-M) cumulus parameterization scheme. This trio-interaction theory extends the Matsuno–Gill theory by incorporating a trio-interaction among convection, moisture, and wave-boundary layer (BL) dynamics. The model is shown to produce robust large-scale characteristics of the observed MJO, including the coupled Kelvin–Rossby wave structure, slow eastward propagation (~5 m/s) over warm pool, the planetary (zonal) scale circulation, the BL low-pressure and moisture convergence preceding major convection, and amplification/decay over warm/cold sea surface temperature (SST) regions. The BL moisture convergence feedback plays a central role in coupling equatorial Kelvin and Rossby waves with convective heating, selecting a preferred eastward propagation, and generating instability. The moisture feedback can enhance Rossby wave component, thereby substantially slowing down eastward propagation. With the trio-interaction theory, a number of fundamental issues of MJO dynamics are addressed: why the MJO possesses a mixed Kelvin–Rossby wave structure and how the Kelvin and Rossby waves, which propagate in opposite directions, could couple together with convection and select eastward propagation; what makes the MJO move eastward slowly in the eastern hemisphere, resulting in the 30–60-day periodicity; why MJO amplifies over the warm pool ocean and decays rapidly across the dateline. Limitation and ramifications of the model results to general circulation modeling of MJO are discussed.",
"title": ""
},
{
"docid": "6fc9388ecbd862e36789250e99fde23d",
"text": "Short Term Tra c Forecasting: Modeling and Learning Spatio Temporal Relations in Transportation Networks Using Graph Neural Networks",
"title": ""
},
{
"docid": "e426f2c13b9cf7c924766f3c57c0c36b",
"text": "Burst-mode clock and data recovery circuits (BMCDR) are widely used in passive optical networks (PON) [1] and as a replacement for conventional CDRs in clock-forwarding links to reduce power [2]. In PON, a single CDR performs the task of clock and data recovery for several burst sequences, each originating from a different source. As a result, the BMCDR is required to lock to an incoming data stream within tens of UIs (for example 40ns in GPON). Previous works use either injection locking [3, 4] or gated VCO [5, 6] to achieve this fast locking. In both cases, the control voltage of the CDR's VCO is generated by a reference PLL with a matching VCO to guarantee accurate frequency locking. However, any component mismatch between the two VCO's results in a frequency offset between the reference PLL frequency and the CDR's VCO frequency, and hence in a reduction of the CDR's tolerance for consecutive identical digits (CID). For example, [7] reports a frequency offset of over 20MHz (2000ppm) for 10Gb/s operation. We present a BMCDR that is based on phase interpolation (PI), eliminating the possibility of local frequency offset between the reference and recovered clock. We demonstrate 1 to 6Gb/s operation in 65nm CMOS with a locking time of less than 1UI.",
"title": ""
},
{
"docid": "4aa96113ad29f737fbbf82f97b558211",
"text": "The null vector method, based on a simple linear algebraic concept, is proposed as a solution to the phase retrieval problem. In the case with complex Gaussian random measurement matrices, a non-asymptotic error bound is derived, yielding an asymptotic regime of accurate approximation comparable to that for the spectral vector method.",
"title": ""
},
{
"docid": "22cb0a390087efcb9fa2048c74e9845f",
"text": "This paper describes the early conception and latest developments of electroactive polymer (EAP)-based sensors, actuators, electronic components, and power sources, implemented as wearable devices for smart electronic textiles (e-textiles). Such textiles, functioning as multifunctional wearable human interfaces, are today considered relevant promoters of progress and useful tools in several biomedical fields, such as biomonitoring, rehabilitation, and telemedicine. After a brief outline on ongoing research and the first products on e-textiles under commercial development, this paper presents the most highly performing EAP-based devices developed by our lab and other research groups for sensing, actuation, electronics, and energy generation/storage, with reference to their already demonstrated or potential applicability to electronic textiles",
"title": ""
},
{
"docid": "b64e9e994730a0a530801c6f22d52cd1",
"text": "The environment of the city is surrounded by a lot of radio frequency (RF) energy that is broadcasted by diverse wireless systems. In order to enhance the value of the energy more than the channels to communicate, the ambient RF energy harvesting system was designed to harvest and recycle the energy for many applications such as battery chargers, sensor devices and portable devices. The main element of the ambient RF energy harvesting system is a rectenna that is the combination of an antenna and a rectifying circuit. Even though the ambient RF energy is widely broadcasted with many systems, the energy is extremely low. Therefore, the high performance of the antenna and the rectifying circuit has to be designed for supporting the small incident power; also the number of frequency channels of the rectenna can enhance the performance and support different harvesting locations. This paper proposes a dual-band rectifier for RF energy harvesting systems that was designed to operate at 2.1 GHz and 2.45 GHz. The first channel can provide the maximum efficiency of 24% with 1.9 V of the output voltage at 10 dBm of input power. On the other hand, a maximum efficiency of 18% and 1.7 V of the output voltage can be achieved by the second channel at 10 dBm of input power.",
"title": ""
}
] |
scidocsrr
|
95f95ff9dd9a832800ee576f45eb27ff
|
Modelling Experimental Game Design
|
[
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "f672af55234d85a113e45fcb65a2149f",
"text": "In recent years, the fields of Interactive Storytelling and Player Modelling have independently enjoyed increased interest in both academia and the computer games industry. The combination of these technologies, however, remains largely unexplored. In this paper, we present PaSSAGE (PlayerSpecific Stories via Automatically Generated Events), an interactive storytelling system that uses player modelling to automatically learn a model of the player’s preferred style of play, and then uses that model to dynamically select the content of an interactive story. Results from a user study evaluating the entertainment value of adaptive stories created by our system as well as two fixed, pre-authored stories indicate that automatically adapting a story based on learned player preferences can increase the enjoyment of playing a computer role-playing game for certain types of players.",
"title": ""
}
] |
[
{
"docid": "8340d53ef94cb267e67a730f7a592257",
"text": "Systems that adapt to input from users are susceptible to attacks from those same users. Recommender systems are common targets for such attacks since there are financial, political and many other motivations for influencing the promotion or demotion of recommendable items [2].Recent research has shown that incorporating trust and reputation models into the recommendation process can have a positive impact on the accuracy and robustness of recommendations. In this paper we examine the effect of using five different trust models in the recommendation process on the robustness of collaborative filtering in an attack situation. In our analysis we also consider the quality and accuracy of recommendations. Our results caution that including trust models in recommendation can either reduce or increase prediction shift for an attacked item depending on the model-building process used, while highlighting approaches that appear to be more robust.",
"title": ""
},
{
"docid": "a35aa35c57698d2518e3485ec7649c66",
"text": "The review paper describes the application of various image processing techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder of the optic nerve, which causes partial loss of vision. Large number of people suffers from eye diseases in rural and semi urban areas all over the world. Current diagnosis of retinal disease relies upon examining retinal fundus image using image processing. The key image processing techniques to detect eye diseases include image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. KeywordsImage Registration; Fusion; Segmentation; Statistical measures; Morphological operation; Classification Full Text: http://www.ijcsmc.com/docs/papers/November2013/V2I11201336.pdf",
"title": ""
},
{
"docid": "d87edfb603b5d69bcd0e0dc972d26991",
"text": "The adult nervous system is not static, but instead can change, can be reshaped by experience. Such plasticity has been demonstrated from the most reductive to the most integrated levels, and understanding the bases of this plasticity is a major challenge. It is apparent that stress can alter plasticity in the nervous system, particularly in the limbic system. This paper reviews that subject, concentrating on: a) the ability of severe and/or prolonged stress to impair hippocampal-dependent explicit learning and the plasticity that underlies it; b) the ability of mild and transient stress to facilitate such plasticity; c) the ability of a range of stressors to enhance implicit fear conditioning, and to enhance the amygdaloid plasticity that underlies it.",
"title": ""
},
{
"docid": "c8edb6b8ed8176368faf591161718b95",
"text": "A new 4-group model of attachment styles in adulthood is proposed. Four prototypic attachment patterns are defined using combinations of a person's self-image (positive or negative) and image of others (positive or negative). In Study 1, an interview was developed to yield continuous and categorical ratings of the 4 attachment styles. Intercorrelations of the attachment ratings were consistent with the proposed model. Attachment ratings were validated by self-report measures of self-concept and interpersonal functioning. Each style was associated with a distinct profile of interpersonal problems, according to both self- and friend-reports. In Study 2, attachment styles within the family of origin and with peers were assessed independently. Results of Study 1 were replicated. The proposed model was shown to be applicable to representations of family relations; Ss' attachment styles with peers were correlated with family attachment ratings.",
"title": ""
},
{
"docid": "ec369ae7aa038ab688173a7583c51a22",
"text": "OBJECTIVE\nTo examine longitudinal associations of parental report of household food availability and parent intakes of fruits, vegetables and dairy foods with adolescent intakes of the same foods. This study expands upon the limited research of longitudinal studies examining the role of parents and household food availability in adolescent dietary intakes.\n\n\nDESIGN\nLongitudinal study. Project EAT-II followed an ethnically and socio-economically diverse sample of adolescents from 1999 (time 1) to 2004 (time 2). In addition to the Project EAT survey, adolescents completed the Youth Adolescent Food-Frequency Questionnaire in both time periods, and parents of adolescents completed a telephone survey at time 1. General linear modelling was used to examine the relationship between parent intake and home availability and adolescent intake, adjusting for time 1 adolescent intakes. Associations were examined separately for the high school and young adult cohorts and separately for males and females in combined cohorts.\n\n\nSUBJECTS/SETTING\nThe sample included 509 pairs of parents/guardians and adolescents.\n\n\nRESULTS\nVegetables served at dinner significantly predicted adolescent intakes of vegetables for males (P = 0.037), females (P = 0.009), high school (P = 0.033) and young adults (P = 0.05) at 5-year follow-up. Among young adults, serving milk at dinner predicted dairy intake (P = 0.002). Time 1 parental intakes significantly predicted intakes of young adults for fruit (P = 0.044), vegetables (P = 0.041) and dairy foods (P = 0.008). Parental intake predicted intake of dairy for females (P = 0.02).\n\n\nCONCLUSIONS\nThe findings suggest the importance of providing parents of adolescents with knowledge and skills to enhance the home food environment and improve their own eating behaviours.",
"title": ""
},
{
"docid": "bf70216c9a73d6711c8acd92918d6e1c",
"text": "Modern conflict-driven clause-learning (CDCL) Boolean SAT solvers provide efficient automatic analysis of real-world feature models (FM) of systems ranging from cars to operating systems. It is well-known that solver-based analysis of real-world FMs scale very well even though SAT instances obtained from such FMs are large, and the corresponding analysis problems are known to be NP-complete. To better understand why SAT solvers are so effective, we systematically studied many syntactic and semantic characteristics of a representative set of large real-world FMs. We discovered that a key reason why large real-world FMs are easy-to-analyze is that the vast majority of the variables in these models are unrestricted, i.e., the models are satisfiable for both true and false assignments to such variables under the current partial assignment. Given this discovery and our understanding of CDCL SAT solvers, we show that solvers can easily find satisfying assignments for such models without too many backtracks relative to the model size, explaining why solvers scale so well. Further analysis showed that the presence of unrestricted variables in these real-world models can be attributed to their high-degree of variability. Additionally, we experimented with a series of well-known nonbacktracking simplifications that are particularly effective in solving FMs. The remaining variables/clauses after simplifications, called the core, are so few that they are easily solved even with backtracking, further strengthening our conclusions. We explain the connection between our findings and backdoors, an idea posited by theorists to explain the power of SAT solvers. This connection strengthens our hypothesis that SAT-based analysis of FMs is easy. In contrast to our findings, previous research characterizes the difficulty of analyzing randomly-generated FMs in terms of treewidth. Our experiments suggest that the difficulty of analyzing real-world FMs cannot be explained in terms of treewidth.",
"title": ""
},
{
"docid": "f285815e47ea0613fb1ceb9b69aee7df",
"text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.",
"title": ""
},
{
"docid": "d043a086f143c713e4c4e74c38e3040c",
"text": "Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.",
"title": ""
},
{
"docid": "ee173a79714c48ebcf6eafdba0bc53f4",
"text": "Little is known about the measurement properties of clinical tests of stepping in different directions for children with cerebral palsy (CP) and Down syndrome (DS). The ability to step in various directions is an important balance skill for daily life. Standardized testing of this skill can yield important information for therapy planning. This observational methodological study was aimed at defining the relative and absolute reliability, minimal detectable difference, and concurrent validity with the Timed Up-&-Go (TUG) of the Four Square Step Test (FSST) for children with CP and DS. Thirty children, 16 with CP and 14 with DS, underwent repeat testing 2 weeks apart on the FSST by 3 raters. TUG was administered on the second test occasion. Intraclass correlation coefficients (ICC [1,1] and [3,1]) with 95% confidence intervals, standard error of measurement (SEM), minimal detectable difference (MDD) and the Spearman rank correlation coefficient were computed. The FSST demonstrated excellent interrater reliability (ICC=.79; 95% CI: .66, .89) and high positive correlation with the TUG (r=.74). Test-retest reliability estimates varied from moderate to excellent among the 3 raters (.54, .78 and .89 for raters 1, 2 and 3, respectively). SEM and MDD were calculated at 1.91s and 5.29s, respectively. Scores on the FSST of children with CP and DS between 5 and 12 years of age are reliable and valid.",
"title": ""
},
{
"docid": "bf42a82730cfc7fb81866fbb345fef64",
"text": "MicroRNAs (miRNAs) are evolutionarily conserved small non-coding RNAs that have crucial roles in regulating gene expression. Increasing evidence supports a role for miRNAs in many human diseases, including cancer and autoimmune disorders. The function of miRNAs can be efficiently and specifically inhibited by chemically modified antisense oligonucleotides, supporting their potential as targets for the development of novel therapies for several diseases. In this Review we summarize our current knowledge of the design and performance of chemically modified miRNA-targeting antisense oligonucleotides, discuss various in vivo delivery strategies and analyse ongoing challenges to ensure the specificity and efficacy of therapeutic oligonucleotides in vivo. Finally, we review current progress on the clinical development of miRNA-targeting therapeutics.",
"title": ""
},
{
"docid": "8a6e7ac784b63253497207c63caa1036",
"text": "Synchronized control (SYNC) is widely adopted for doubly fed induction generator (DFIG)-based wind turbine generators (WTGs) in microgrids and weak grids, which applies P-f droop control to achieve grid synchronization instead of phase-locked loop. The DFIG-based WTG with SYNC will reach a new equilibrium of rotor speed under frequency deviation, resulting in the WTG's acceleration or deceleration. The acceleration/deceleration process can utilize the kinetic energy stored in the rotating mass of WTG to provide active power support for the power grid, but the WTG may lose synchronous stability simultaneously. This stability problem occurs when the equilibrium of rotor speed is lost and the rotor speed exceeds the admissible range during the frequency deviations, which will be particularly analyzed in this paper. It is demonstrated that the synchronous stability can be improved by increasing the P-f droop coefficient. However, increasing the P-f droop coefficient will deteriorate the system's small signal stability. To address this contradiction, a modified synchronized control strategy is proposed. Simulation results verify the effectiveness of the analysis and the proposed control strategy.",
"title": ""
},
{
"docid": "063074a6129dea17c7626c17faccd96f",
"text": "PrDOS is a server that predicts the disordered regions of a protein from its amino acid sequence (http://prdos.hgc.jp). The server accepts a single protein amino acid sequence, in either plain text or FASTA format. The prediction system is composed of two predictors: a predictor based on local amino acid sequence information and one based on template proteins. The server combines the results of the two predictors and returns a two-state prediction (order/disorder) and a disorder probability for each residue. The prediction results are sent by e-mail, and the server also provides a web-interface to check the results.",
"title": ""
},
{
"docid": "2f9efc20fb961bc42f20211a6c958832",
"text": "We introduce PixelPlayer, a system that, by leveraging large amounts of unlabeled videos, learns to locate image regions which produce sounds and separate the input sounds into a set of components that represents the sound from each pixel. Our approach capitalizes on the natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images, without requiring additional manual supervision. Experimental results on a newly collected MUSIC dataset show that our proposed Mix-and-Separate framework outperforms several baselines on source separation. Qualitative results suggest our model learns to ground sounds in vision, enabling applications such as independently adjusting the volume of sound sources.",
"title": ""
},
{
"docid": "42734f20beec1ed290f10dc7f379a2e3",
"text": "Recent advances in deep domain adaptation reveal that adversarial learning can be embedded into deep networks to learn transferable features that reduce distribution discrepancy between the source and target domains. Existing domain adversarial adaptation methods based on single domain discriminator only align the source and target data distributions without exploiting the complex multimode structures. In this paper, we present a multi-adversarial domain adaptation (MADA) approach, which captures multimode structures to enable fine-grained alignment of different data distributions based on multiple domain discriminators. The adaptation can be achieved by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Empirical evidence demonstrates that the proposed model outperforms state of the art methods on standard domain adaptation datasets.",
"title": ""
},
{
"docid": "5c935db4a010bc26d93dd436c5e2f978",
"text": "A taxonomic revision of Australian Macrobrachium identified three species new to the Australian fauna – two undescribed species and one new record, viz. M. auratumsp. nov., M. koombooloombasp. nov., and M. mammillodactylus(Thallwitz, 1892). Eight taxa previously described by Riek (1951) are recognised as new junior subjective synonyms, viz. M. adscitum adscitum, M. atactum atactum, M. atactum ischnomorphum, M. atactum sobrinum, M. australiense crassum, M. australiense cristatum, M. australiense eupharum of M. australienseHolthuis, 1950, and M. glypticumof M. handschiniRoux, 1933. Apart from an erroneous type locality for a junior subjective synonym, there were no records to confirm the presence of M. australe(Guérin-Méneville, 1838) on the Australian continent. In total, 13 species of Macrobrachiumare recorded from the Australian continent. Keys to male developmental stages and Australian species are provided. A revised diagnosis is given for the genus. A list of 31 atypical species which do not appear to be based on fully developed males or which require re-evaluation of their generic status is provided. Terminology applied to spines and setae is revised.",
"title": ""
},
{
"docid": "d380a5de56265c80309733370c612316",
"text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.",
"title": ""
},
{
"docid": "3194a0dd979b668bb25afb10260c30d2",
"text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.",
"title": ""
},
{
"docid": "0b56a411692b4c0c051ef318d996511f",
"text": "The pathophysiology of perinatal brain injury is multifactorial and involves hypoxia-ischemia (HI) and inflammation. N-methyl-d-aspartate receptors (NMDAR) are present on neurons and glia in immature rodents, and NMDAR antagonists are protective in HI models. To enhance clinical translation of rodent data, we examined protein expression of 6 NMDAR subunits in postmortem human brains without injury from 20 postconceptional weeks through adulthood and in cases of periventricular leukomalacia (PVL). We hypothesized that the developing brain is intrinsically vulnerable to excitotoxicity via maturation-specific NMDAR levels and subunit composition. In normal white matter, NR1 and NR2B levels were highest in the preterm period compared with adult. In gray matter, NR2A and NR3A expression were highest near term. NR2A was significantly elevated in PVL white matter, with reduced NR1 and NR3A in gray matter compared with uninjured controls. These data suggest increased NMDAR-mediated vulnerability during early brain development due to an overall upregulation of individual receptors subunits, in particular, the presence of highly calcium permeable NR2B-containing and magnesium-insensitive NR3A NMDARs. These data improve understanding of molecular diversity and heterogeneity of NMDAR subunit expression in human brain development and supports an intrinsic prenatal vulnerability to glutamate-mediated injury; validating NMDAR subunit-specific targeted therapies for PVL.",
"title": ""
},
{
"docid": "edd6fb76f672e00b14935094cb0242d0",
"text": "Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for taskoriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.",
"title": ""
}
] |
scidocsrr
|
c22eff7d67f6fc9a9fd3853f762a53eb
|
Cross-domain Effects of Music and Language Experience on the Representation of Pitch in the Human Auditory Brainstem
|
[
{
"docid": "3654827519075eac6bfe5ee442c6d4b2",
"text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.",
"title": ""
},
{
"docid": "6509150b9a7fcf201eb19b98d88adc4f",
"text": "The main aim of the present experiment was to determine whether extensive musical training facilitates pitch contour processing not only in music but also in language. We used a parametric manipulation of final notes' or words' fundamental frequency (F0), and we recorded behavioral and electrophysiological data to examine the precise time course of pitch processing. We compared professional musicians and nonmusicians. Results revealed that within both domains, musicians detected weak F0 manipulations better than nonmusicians. Moreover, F0 manipulations within both music and language elicited similar variations in brain electrical potentials, with overall shorter onset latency for musicians than for nonmusicians. Finally, the scalp distribution of an early negativity in the linguistic task varied with musical expertise, being largest over temporal sites bilaterally for musicians and largest centrally and over left temporal sites for nonmusicians. These results are taken as evidence that extensive musical training influences the perception of pitch contour in spoken language.",
"title": ""
},
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
}
] |
[
{
"docid": "45390290974f347d559cd7e28c33c993",
"text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.",
"title": ""
},
{
"docid": "8eb96ae8116a16e24e6a3b60190cc632",
"text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.",
"title": ""
},
{
"docid": "3667adb02ff66fee9a77ba02a774f42f",
"text": "This report points out a correlation between asthma and dental caries. It also gives certain guidelines on the measures to be taken in an asthmatic to negate the risk of dental caries.",
"title": ""
},
{
"docid": "f4708a4f62cb17a83ed14c65e5f14f32",
"text": "Data imbalance is common in many vision tasks where one or more classes are rare. Without addressing this issue, conventional methods tend to be biased toward the majority class with poor predictive accuracy for the minority class. These methods further deteriorate on small, imbalanced data that have a large degree of class overlap. In this paper, we propose a novel discriminative sparse neighbor approximation (DSNA) method to ameliorate the effect of class-imbalance during prediction. Specifically, given a test sample, we first traverse it through a cost-sensitive decision forest to collect a good subset of training examples in its local neighborhood. Then, we generate from this subset several class-discriminating but overlapping clusters and model each as an affine subspace. From these subspaces, the proposed DSNA iteratively seeks an optimal approximation of the test sample and outputs an unbiased prediction. We show that our method not only effectively mitigates the imbalance issue, but also allows the prediction to extrapolate to unseen data. The latter capability is crucial for achieving accurate prediction on small data set with limited samples. The proposed imbalanced learning method can be applied to both classification and regression tasks at a wide range of imbalance levels. It significantly outperforms the state-of-the-art methods that do not possess an imbalance handling mechanism, and is found to perform comparably or even better than recent deep learning methods by using hand-crafted features only.",
"title": ""
},
{
"docid": "385ae4c2278c2f4b876bf50941e98998",
"text": "Deep neural networks (DNN) have been successfully employed for the problem of monaural sound source separation achieving state-of-the-art results. In this paper, we propose using convolutional recurrent neural network (CRNN) architecture for tackling this problem. We focus on a scenario where low algorithmic delay (< 10 ms) is paramount, and relatively little training data is available. We show that the proposed architecture can achieve slightly better performance as compared to feedforward DNNs and long short-term memory (LSTM) networks. In addition to reporting separation performance metrics (i.e., source to distortion ratios), we also report extended short term objective intelligibility (ESTOI) scores which better predict intelligibility performance in presence of non-stationary interferers.",
"title": ""
},
{
"docid": "a13a50d552572d08b4d1496ca87ac160",
"text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.",
"title": ""
},
{
"docid": "913709f4fe05ba2783c3176ed00015fe",
"text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>",
"title": ""
},
{
"docid": "345e46da9fc01a100f10165e82d9ca65",
"text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.",
"title": ""
},
{
"docid": "13c250fc46dfc45e9153dbb1dc184b70",
"text": "This paper proposes Travel Prediction-based Data forwarding (TPD), tailored and optimized for multihop vehicle-to-vehicle communications. The previous schemes forward data packets mostly utilizing statistical information about road network traffic, which becomes much less accurate when vehicles travel in a light-traffic vehicular network. In this light-traffic vehicular network, highly dynamic vehicle mobility can introduce a large variance for the traffic statistics used in the data forwarding process. However, with the popularity of GPS navigation systems, vehicle trajectories become available and can be utilized to significantly reduce this uncertainty in the road traffic statistics. Our TPD takes advantage of these vehicle trajectories for a better data forwarding in light-traffic vehicular networks. Our idea is that with the trajectory information of vehicles in a target road network, a vehicle encounter graph is constructed to predict vehicle encounter events (i.e., timing for two vehicles to exchange data packets in communication range). With this encounter graph, TPD optimizes data forwarding process for minimal data delivery delay under a specific delivery ratio threshold. Through extensive simulations, we demonstrate that our TPD significantly outperforms existing legacy schemes in a variety of road network settings. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "45bf73a93f0014820864d1805f257bfc",
"text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is used to increase the voltage conversion ratio as well as it achieves high efficiency. In the proposed SEPIC based BDC converter is used to increase the voltage proposal of this is low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project PIC microcontro9 ller is used to give faster response than the existing system. The proposed scheme ensures that the voltage on the both sides of the converter is always matched thereby the conduction losses can be reduced to improve efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.",
"title": ""
},
{
"docid": "4e7f166d1098b1223f03afa78adc7b46",
"text": "This paper builds a theory of trust based on informal contract enforcement in social networks. In our model, network connections between individuals can be used as social collateral to secure informal borrowing. We de
ne network-based trust as the highest amount one agent can borrow from another agent, and derive a reduced-form expression for this quantity which we then use in three applications. (1) We predict that dense networks generate bonding social capital that allows transacting valuable assets, while loose networks create bridging social capital that improves access to cheap favors like information. (2) For job recommendation networks, we show that strong ties between employers and trusted recommenders reduce asymmetric information about the quality of job candidates. (3) Using data from Peru, we show empirically that network-based trust predicts informal borrowing, and we structurally estimate and test our model. E-mails: dean.karlan@yale.edu, mobius@fas.harvard.edu, tanyar@iastate.edu, szeidl@econ.berkeley.edu. We thank Attila Ambrus, Susan Athey, Antoni Calvó-Armengol, Pablo Casas-Arce, Rachel Croson, Avinash Dixit, Drew Fudenberg, Andrea Galeotti, Ed Glaeser, Sanjeev Goyal, Daniel Hojman, Matthew Jackson, Rachel Kranton, Ariel Pakes, Andrea Prat, Michael Schwarz, Andrei Shleifer, Andy Skrzypacz, Fernando Vega-Redondo and seminar participants for helpful comments. A growing body of research demonstrates the importance of trust for economic outcomes.1 Arrow (1974) calls trust an important lubricant of a social system. If trust is low, poverty can persist because individuals are unable to acquire capital, even if they have strong investment opportunities. If trust is high, informal transactions can be woven into daily life and help generate e¢ cient allocations of resources. But what determines the level of trust between individuals? In this paper we propose a model where the social network inuences how much agents trust each other. Sociologists such as Granovetter (1985), Coleman (1988) and Putnam (2000) have long argued that social networks play an important role in building trust.2 In our model, networks create trust when agents use connections as social collateral to facilitate informal borrowing. The possibility of losing valuable friendships secures informal transactions the same way that the possibility of losing physical collateral can secure formal lending.3 Since both direct and indirect connections can serve as social collateral, the level of trust is determined by the structure of the entire network. Although we present our model in terms of trust over a borrowing transaction, it can also apply to other situations that involve moral hazard or asymmetric information, such as hiring workers through referrals.4 To understand the basic logic of our model, consider the examples in Figure 1, where agent s would like to borrow an asset, like a car, from agent t, in an economy with no formal contract enforcement. In Figure 1A, the network consists only of s and t; the value of their relationship, which represents either the social bene
ts of friendship or the present value of future transactions, is assumed to be 2. As in standard models of informal contracting, t will only lend the car if its value does not exceed the relationship value of 2. More interesting is Figure 1B, where s and t have a common friend u, the value of the friendship between s and u is 3, and that between u and t is 4. Here, the common friend increases the borrowing limit by min [3; 4] = 3, the weakest link on the path connecting borrower and lender through u, to a total of 5. The logic is that the intermediate agent u vouches for the borrower, acting as a guarantor of the loan transaction. If the borrower chooses not to return the car, he is breaking his promise of repayment to u, and therefore loses us Trust has been linked with outcomes including economic growth (Knack and Keefer 1997), judicial e¢ ciency and lack of corruption (La Porta, Lopez-de-Silanes, Shleifer, and Vishny 1997), international trade and
nancial ows (Guiso, Sapienza, and Zingales 2008), and private investment (Bohnet, Herrman, and Zeckhauser 2008). Glaeser, Laibson, Scheinkman, and Soutter (2000) show in experiments that social connections increase trust. Field evidence on the role of networks in trust-intensive exchange includes McMillan and Woodru¤ (1999) and Johnson, McMillan, and Woodru¤ (2002) for business transactions in Vietnam and transition countries; Townsend (1994) and Udry (1994) for insurance arrangements in India and Nigeria; and Macaulay (1963) and Uzzi (1999) for
rms in the U.S. We abstract from morality, altruism and other mechansisms that can generate trust even between strangers (e.g., Fukuyama (1995), Berg, Dickhaut, and McCabe (1995)); hence our de
nition of trust is like Hardins (1992). 4 In related work, Kandori (1992), Greif (1993) and Ellison (1994) develop models of community enforcement where deviators are punished by all members of society. More recently, Ali and Miller (2008), Bloch, Genicot, and Ray (2005), Dixit (2003) and Lippert and Spagnolo (2006) explore models of informal contracting where networks are used to transmit information. In contrast, in our work the network serves as social collateral.",
"title": ""
},
{
"docid": "1bb21862a8c5c7264933e19ed316499c",
"text": "In this paper, we present approximation algorithms for the directed multi-multiway cut and directed multicut problems. The so called region growing paradigm [1] is modified and used for these two cut problems on directed graphs By this paradigm, we give for each problem an approximation algorithm such that both algorithms have an approximate factor. The work previously done on these problems need to solve k linear programming, whereas our algorithms require only one linear programming for obtaining a good approximate factor.",
"title": ""
},
{
"docid": "c7ff67367986a0c7447045cae18fa43a",
"text": "Wireless Power Transfer (WPT) technology is a novel research area in the charging technology that bridges the utility and the automotive industries. There are various solutions that are currently being evaluated by several research teams to find the most efficient way to manage the power flow from the grid to the vehicle energy storage system. There are different control parameters that can be utilized to compensate for the change in the impedance due to variable parameters such as battery state-of-charge, coupling factor, and coil misalignment. This paper presents the implementation of an active front-end rectifier on the grid side for power factor control and voltage boost capability for load power regulation. The proposed SiC MOSFET based single phase active front end rectifier with PFC resulted in >97% efficiency at 137mm air-gap and >95% efficiency at 160mm air-gap.",
"title": ""
},
{
"docid": "7f479783ccab6c705bc1d76533f0b1c6",
"text": "The purpose of this research, computerized hotel management system with Satellite Motel Ilorin, Nigeria as the case study is to understand and make use of the computer to solve some of the problems which are usually encountered during manual operations of the hotel management. Finding an accommodation or a hotel after having reached a particular destination is quite time consuming as well as expensive. Here comes the importance of online hotel booking facility. Online hotel booking is one of the latest techniques in the arena of internet that allows travelers to book a hotel located anywhere in the world and that too according to your tastes and preferences. In other words, online hotel booking is one of the awesome facilities of the internet. Booking a hotel online is not only fast as well as convenient but also very cheap. Nowadays, many of the hotel providers have their sites on the web, which in turn allows the users to visit these sites and view the facilities and amenities offered by each of them. So, the proposed computerized of an online hotel management system is set to find a more convenient, well organized, faster, reliable and accurate means of processing the current manual system of the hotel for both near and far customer.",
"title": ""
},
{
"docid": "8bd5d94ed7b92845abae07a636cce185",
"text": "The media world of today's youth is almost completely digital. With newspapers going online and television becoming increasingly digital, the current generation of youth has little reason to consume analog media. Music, movies, and all other forms of mass-mediated content can be obtained via a wide array of digital devices, ranging from CDs to DVDs, from iPods to PDAs. Even their nonmedia experiences are often characterized by a reliance on digital devices. Most young people communicate with most of their acquaintances through cell phones and computer-mediated communication tools such as instant messengers and e-mail systems. 1 And, with the arrival of personal broadcasting technologies such as blogs and social networking sites, many youngsters experience the world through their own self-expression and the expressions of their peers. This serves to blur the traditional boundary between interpersonal and mass communication, leading to an idiosyncratic construction of one's media world. Customization in the digital age—be it in the form of Web sites such as cus-tomizable portals that allow users to shape content or devices such as iPods that allow for customized playlists—enables the user to serve as the gatekeeper of content. As media get highly interactive, multimodal, and navigable, the receiver tends to become the source of communication. 2 While this leads naturally to egocentric construals of one's information environment, it also raises questions about the veracity of all the material that is consumed. The ease of digital publishing has made authors out of us all, leading to a dramatic profusion of information available for personal as well as public consumption. Much of this information, however, is free-floating and does not follow any universally accepted gatekeeping standards, let alone a professional process of writing and editing. Therefore, the veridicality of information accessed on the Web and other digital media is often suspect. 3 This makes credibility a supremely key concern in the new media environment, necessitating the constant need to critically assess information while consuming it. Credibility is classically ascertained by considering the source of information. If the attributed source of a piece of information is a credible person or organization, then, according to conventional wisdom, that information is probably reliable. However, in Internet-based media, source is a murky entity because there are often multiple layers of sources in online transmission of information (e.g., e-mail from a friend giving you a piece of information that he or she found on a newsgroup, posted …",
"title": ""
},
{
"docid": "544a5a95a169b9ac47960780ac09de80",
"text": "Monte Carlo Tree Search methods have led to huge progress in Computer Go. Still, program performance is uneven most current Go programs are much stronger in some aspects of the game, such as local fighting and positional evaluation, than in others. Well known weaknesses of many programs include the handling of several simultaneous fights, including the “two safe groups” problem, and dealing with coexistence in seki. Starting with a review of MCTS techniques, several conjectures regarding the behavior of MCTS-based Go programs in specific types of Go situations are made. Then, an extensive empirical study of ten leading Go programs investigates their performance of two specifically designed test sets containing “two safe group” and seki situations. The results give a good indication of the state of the art in computer Go as of 2012/2013. They show that while a few of the very top programs can apparently solve most of these evaluation problems in their playouts already, these problems are difficult to solve by global search. ∗shihchie@ualberta.ca †mmueller@ualberta.ca",
"title": ""
},
{
"docid": "a4418b6e010a630a8ae1f10ce23e0ec5",
"text": "While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoderdecoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors.",
"title": ""
},
{
"docid": "184319fbdee41de23718bb0831c53472",
"text": "Localization is a prominent application and research area in Wireless Sensor Networks. Various research studies have been carried out on localization techniques and algorithms in order to improve localization accuracy. Received signal strength indicator is a parameter, which has been widely used in localization algorithms in many research studies. There are several environmental and other factors that affect the localization accuracy and reliability. This study introduces a new technique to increase the localization accuracy by employing a dynamic distance reference anchor method. In order to investigate the performance improvement obtained with the proposed technique, simulation models have been developed, and results have been analyzed. The simulation results show that considerable improvement in localization accuracy can be achieved with the proposed model.",
"title": ""
},
{
"docid": "82e6da590f8f836c9a06c26ef4440005",
"text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.",
"title": ""
}
] |
scidocsrr
|
8e0d73df450e50012dccc681672d87f1
|
Adversarial Message Passing For Graphical Models
|
[
{
"docid": "234acba61dacec90d771a396f04e19f8",
"text": "Image Super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high-resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Furthermore, MAP inference is often performed via optimisationbased iterative algorithms which don’t compare well with the efficiency of neuralnetwork-based alternatives. Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data, achieving particularly good results in photo-realistic texture SR.",
"title": ""
},
{
"docid": "a33cf416cf48f67cd0a91bf3a385d303",
"text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f -divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.",
"title": ""
},
{
"docid": "9b9181c7efd28b3e407b5a50f999840a",
"text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent ∗Weinan Zhang is the corresponding author. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. as the length of sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. 
Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and been mostly applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good it is now and the future score as the entire sequence. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feedback the evaluation to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three real-world tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently train deep belief nets (DBN). (Bengio et al.
2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, variational autoencoder (VAE) that combines deep learning with statistical inference intended to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In",
"title": ""
}
] |
[
{
"docid": "4753890e95974bc9f7d795ded183fa89",
"text": "Large scale knowledge bases systems are difficult and expensive to construct. If we could share knowledge across systems, costs would be reduced. However, because knowledge bases are typically constructed from scratch, each with their own idiosyncratic structure, sharing is difficult. Recent research has focused on the use of ontologies to promote sharing. An ontology is a hierarchically structured set of terms for describing a domain that can be used as a skeletal foundation for a knowledge base. If two knowledge bases are built on a common ontology, knowledge can be more readily shared, since they share a common underlying structure. This paper outlines a set of desiderata for ontologies, and then describes how we have used a large-scale (50,000+ concept) ontology develop a specialized, domain-specific ontology semiautomatically. We then discuss the relation between ontologies and the process of developing a system, arguing that to be useful, an ontology needs to be created as a \"living document\", whose development is tightly integrated with the system’s. We conclude with a discussion of Web-based ontology tools we are developing to support this approach.",
"title": ""
},
{
"docid": "64fb3fdb4f37ee75b1506c2fdb09cf7a",
"text": "With the proliferation of mobile devices, cloud-based photo sharing and searching services are becoming common du e to the mobile devices’ resource constrains. Meanwhile, the r is also increasing concern about privacy in photos. In this wor k, we present a framework SouTu, which enables cloud servers to provide privacy-preserving photo sharing and search as a se rvice to mobile device users. Privacy-seeking users can share the ir photos via our framework to allow only their authorized frie nds to browse and search their photos using resource-bounded mo bile devices. This is achieved by our carefully designed archite cture and novel outsourced privacy-preserving computation prot ocols, through which no information about the outsourced photos or even the search contents (including the results) would be revealed to the cloud servers. Our framework is compatible with most of the existing image search technologies, and it requi res few changes to the existing cloud systems. The evaluation of our prototype system with 31,772 real-life images shows the communication and computation efficiency of our system.",
"title": ""
},
{
"docid": "1450854a32ea6c18f4cc817f686aaf15",
"text": "This article reports on the development of two measures relating to historical trauma among American Indian people: The Historical Loss Scale and The Historical Loss Associated Symptoms Scale. Measurement characteristics including frequencies, internal reliability, and confirmatory factor analyses were calculated based on 143 American Indian adult parents of children aged 10 through 12 years who are part of an ongoing longitudinal study of American Indian families in the upper Midwest. Results indicate both scales have high internal reliability. Frequencies indicate that the current generation of American Indian adults have frequent thoughts pertaining to historical losses and that they associate these losses with negative feelings. Two factors of the Historical Loss Associated Symptoms Scale indicate one anxiety/depression component and one anger/avoidance component. The results are discussed in terms of future research and theory pertaining to historical trauma among American Indian people.",
"title": ""
},
{
"docid": "942da03bcd01ecdcb7e1334940c7c549",
"text": "This paper introduces three classic models of statistical topic models: Latent Semantic Indexing (LSI), Probabilistic Latent Semantic Indexing (PLSI) and Latent Dirichlet Allocation (LDA). Then a method of text classification based on LDA model is briefly described, which uses LDA model as a text representation method. Each document means a probability distribution of fixed latent topic sets. Next, Support Vector Machine (SVM) is chose as classification algorithm. Finally, the evaluation parameters in classification system of LDA with SVM are higher than other two methods which are LSI with SVM and VSM with SVM, showing a better classification performance.",
"title": ""
},
{
"docid": "1c3af13e29fc8a1cea5ee821d62b86f0",
"text": "Cellular and 802.11 WiFi are compelling options for mobile Internet connectivity. The goal of our work is to understand the performance afforded by each of these technologies in diverse environments and use conditions. In this paper, we compare and contrast cellular and WiFi performance using crowd-sourced data from Speedtest.net. Our study considers spatio-temporal performance (upload/download throughput and latency) using over 3 million user-initiated tests from iOS and Android apps in 15 different metro areas collected over a 15 week period. Our basic performance comparisons show that (i) WiFi provides better absolute download/upload throughput, and a higher degree of consistency in performance; (ii) WiFi networks generally deliver lower absolute latency, but the consistency in latency is often better with cellular access; (iii) throughput and latency vary widely depending on the particular access type e.g., HSPA, EVDO, LTE, WiFi, etc.) and service provider. More broadly, our results show that performance consistency for cellular and WiFi is much lower than has been reported for wired broadband. Temporal analysis shows that average performance for cell and WiFi varies with time of day, with the best performance for large metro areas coming at non-peak hours. Spatial analysis shows that performance is highly variable across metro areas, but that there are subregions that offer consistently better performance for cell or WiFi. Comparisons between metro areas show that larger areas provide higher throughput and lower latency than smaller metro areas, suggesting where ISPs have focused their deployment efforts. Finally, our analysis reveals diverse performance characteristics resulting from the rollout of new cell access technologies and service differences among local providers.",
"title": ""
},
{
"docid": "6081bf3a4f6e742ffc834a384223d66d",
"text": "According to the vision of the society to brand trust and brand loyalty, this study is conducted to \"investigate the effective factors on the loyalty to the brand in social media with a case study of Samsung brand\". This research is important and necessary because we can obtain predictions and plans by the achieved results for the society and the influence of virtual social media as a non-native technology. Therefore, the focus of this research is on the relationships of customers whom use social media and the effects of these relationships on brand trust and brand loyalty in brand society. In this research, descriptive, correlational and causal –comparative methods are used. And users of social media in Tehran city were considered as statistical population, because this statistical population is infinite, 384 samples were selected by simple random method and were studied by standard questionnaire of Michele Laruch et al (2012). Expert’s approval and Cronbach's alpha coefficient test with value of 0.922 and Splithalf method with the value of 0.920 were used to measure the validity and reliability of the questionnaire. Then, SPSS software, and uni-variate and multivariate linear regression analysis were used to calculate the effect of each independent variable on the dependent variable and the relationship between them. The obtained results show that social media has positive effects on customer-product, customer-brand, customer-company, customer-other customer’s relationships which in turn has a positive effect on brand trust, and brand trust has positive effects brand loyalty. We have found that brand trust has a quite intermediate role in changing the effects of improved relationships in brand society to brand loyalty. © 2015 Bull. Georg. Natl.Acad. Sci.",
"title": ""
},
{
"docid": "0f2caa9b91c2c180cbfbfcc25941f78e",
"text": "BACKGROUND\nSevere mitral annular calcification causing degenerative mitral stenosis (DMS) is increasingly encountered in patients undergoing mitral and aortic valve interventions. However, its clinical profile and natural history and the factors affecting survival remain poorly characterized. The goal of this study was to characterize the factors affecting survival in patients with DMS.\n\n\nMETHODS\nAn institutional echocardiographic database was searched for patients with DMS, defined as severe mitral annular calcification without commissural fusion and a mean transmitral diastolic gradient of ≥2 mm Hg. This resulted in a cohort of 1,004 patients. Survival was analyzed as a function of clinical, pharmacologic, and echocardiographic variables.\n\n\nRESULTS\nThe patient characteristics were as follows: mean age, 73 ± 14 years; 73% women; coronary artery disease in 49%; and diabetes mellitus in 50%. The 1- and 5-year survival rates were 78% and 47%, respectively, and were slightly worse with higher DMS grades (P = .02). Risk factors for higher mortality included greater age (P < .0001), atrial fibrillation (P = .0009), renal insufficiency (P = .004), mitral regurgitation (P < .0001), tricuspid regurgitation (P < .0001), elevated right atrial pressure (P < .0001), concomitant aortic stenosis (P = .02), and low serum albumin level (P < .0001). Adjusted for propensity scores, use of renin-angiotensin system blockers (P = .02) or statins (P = .04) was associated with better survival, and use of digoxin was associated with higher mortality (P = .007).\n\n\nCONCLUSIONS\nPrognosis in patients with DMS is poor, being worse in the aged and those with renal insufficiency, atrial fibrillation, and other concomitant valvular lesions. Renin-angiotensin system blockers and statins may confer a survival benefit, and digoxin use may be associated with higher mortality in these patients.",
"title": ""
},
{
"docid": "4cef84bb3a1ff5f5ed64a4149d501f57",
"text": "In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is the intelligence exhibited by machines or software. It is the subfield of computer science. Artificial Intelligence is becoming a popular field in computer science as it has enhanced the human life in many areas. Artificial intelligence in the last two decades has greatly improved performance of the manufacturing and service systems. Study in the area of artificial intelligence has given rise to the rapidly growing technology known as expert system. Application areas of Artificial Intelligence is having a huge impact on various fields of life as expert system is widely used these days to solve the complex problems in various areas as science, engineering, business, medicine, weather forecasting. The areas employing the technology of Artificial Intelligence have seen an increase in the quality and efficiency. This paper gives an overview of this technology and the application areas of this technology. This paper will also explore the current use of Artificial Intelligence technologies in the PSS design to damp the power system oscillations caused by interruptions, in Network Intrusion for protecting computer and communication networks from intruders, in the medical areamedicine, to improve hospital inpatient care, for medical image classification, in the accounting databases to mitigate the problems of it and in the computer games.",
"title": ""
},
{
"docid": "d6d275b719451982fa67d442c55c186c",
"text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.",
"title": ""
},
{
"docid": "4003b1a03be323c78e98650895967a07",
"text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.",
"title": ""
},
{
"docid": "6dbabfe7370b19c55a52671c82c3e3c8",
"text": "The development of a compact circular polarization Orthomode Trasducer (OMT) working in two frequency bands with dual circular polarization (RHCP & LHCP) is presented. The device covers the complete communication spectrum allocated at C-band. At the same time, the device presents high power handling capability and very low mass and envelope size. The OMT plus a feed horn are used to illuminate a Reflector antenna, the surface of which is shaped to provide domestic or regional coverage from geostationary orbit. The full band operation increases the earth-satellite communication capability. The paper will show the OMT selected architecture, the RF performances at unit level and at component level. RF power aspects like multipaction and PIM are addressed. This development was performed under European Space Agency ESA ARTES-4 program.",
"title": ""
},
{
"docid": "2876086e4431e8607d5146f14f0c29dc",
"text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.",
"title": ""
},
{
"docid": "17663e43a26892d78f52abe4bceb8a28",
"text": "This paper presents a project named PBMaster, which provides an open implementation of the Profibus DP (Process Field Bus Decentralized Peripherals). The project implements a software implementation of this very popular fieldbus used in factory automation. Most Profibus solutions, especially those implementing the master station, are based on ASICs, which require bespoke hardware to be built solely for the purpose of Profibus from the outset. Conversely, this software implementation can run on a wide range of hardware, where the UART and RS-485 standards are present.",
"title": ""
},
{
"docid": "3460dbea27f1de0f13636c04bbfb2569",
"text": "The secret keys of critical network authorities -- such as time, name, certificate, and software update services -- represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.",
"title": ""
},
{
"docid": "37d36c930f6cf75d469aa27a8cd7f48f",
"text": "Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.",
"title": ""
},
{
"docid": "a6acba54f34d1d101f4abb00f4fe4675",
"text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.",
"title": ""
},
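A minimal sketch of the time-respecting reachability idea described in the abstract above. It uses exact Python sets instead of the HyperLogLog sketches the authors use, omits the limited-time-span constraint, and the example interactions are made up; it illustrates the concept rather than reproducing the paper's algorithm.

```python
from collections import defaultdict

def time_respecting_outreach(interactions):
    """interactions: (timestamp, src, dst) triples; a single pass in time order
    maintains, for each node v, the set of sources with an information channel
    into v (the paper replaces these exact sets with HyperLogLog sketches)."""
    reached_by = defaultdict(set)
    for _, u, v in sorted(interactions):
        reached_by[v] |= reached_by[u]      # every source reaching u now reaches v
        reached_by[v].add(u)
    outreach = defaultdict(int)             # outreach[u] = nodes reachable from u
    for v, sources in reached_by.items():
        sources.discard(v)                  # ignore trivial self-reach
        for u in sources:
            outreach[u] += 1
    return dict(outreach)

# c -> a (t=0), a -> b (t=1), b -> c (t=2): c reaches {a, b}, a reaches {b, c}, b reaches {c}
print(time_respecting_outreach([(0, "c", "a"), (1, "a", "b"), (2, "b", "c")]))
```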
{
"docid": "2802e8fd4d8df23d55dee9afac0f4177",
"text": "Brain plasticity refers to the brain's ability to change structure and function. Experience is a major stimulant of brain plasticity in animal species as diverse as insects and humans. It is now clear that experience produces multiple, dissociable changes in the brain including increases in dendritic length, increases (or decreases) in spine density, synapse formation, increased glial activity, and altered metabolic activity. These anatomical changes are correlated with behavioral differences between subjects with and without the changes. Experience-dependent changes in neurons are affected by various factors including aging, gonadal hormones, trophic factors, stress, and brain pathology. We discuss the important role that changes in dendritic arborization play in brain plasticity and behavior, and we consider these changes in the context of changing intrinsic circuitry of the cortex in processes such as learning.",
"title": ""
},
{
"docid": "9478efffef9b34aa43a3e69765a48507",
"text": "Digital chaotic ciphers have been investigated for more than a decade. However, their overall performance in terms of the tradeoff between security and speed, as well as the connection between chaos and cryptography, has not been sufficiently addressed. We propose a chaotic Feistel cipher and a chaotic uniform cipher. Our plan is to examine crypto components from both dynamical-system and cryptographical points of view, thus to explore connection between these two fields. In the due course, we also apply dynamical system theory to create cryptographically secure transformations and evaluate cryptographical security measures",
"title": ""
},
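For illustration of the general idea named in the abstract above, here is a toy Feistel structure whose round keys are drawn from iterates of the logistic map. It is not the authors' cipher and is not secure; the round function, constants and key derivation are arbitrary assumptions made only to show how a chaotic map can drive a Feistel round.

```python
def logistic_stream(x0, rounds, r=3.99):
    """Derive 16-bit round keys from the logistic map x <- r*x*(1-x), 0 < x0 < 1."""
    keys, x = [], x0
    for _ in range(rounds):
        x = r * x * (1.0 - x)
        keys.append(int(x * 0xFFFF) & 0xFFFF)
    return keys

def _round(half, key):
    return (half * 31 + key) & 0xFFFF          # arbitrary toy round function F

def feistel_encrypt(block32, x0, rounds=8):
    left, right = (block32 >> 16) & 0xFFFF, block32 & 0xFFFF
    for k in logistic_stream(x0, rounds):
        left, right = right, left ^ _round(right, k)
    return (left << 16) | right

def feistel_decrypt(block32, x0, rounds=8):
    left, right = (block32 >> 16) & 0xFFFF, block32 & 0xFFFF
    for k in reversed(logistic_stream(x0, rounds)):
        right, left = left, right ^ _round(left, k)
    return (left << 16) | right

ct = feistel_encrypt(0x12345678, x0=0.4213)
assert feistel_decrypt(ct, x0=0.4213) == 0x12345678
```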
{
"docid": "0afbce731c55b9a3d3ced22ad59aa0ef",
"text": "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semisupervised learning, and adapt the translated model to better fit the data distribution of the target language.",
"title": ""
},
{
"docid": "efb124a26b0cdc9b022975dd83ec76c8",
"text": "Apache Spark is an open-source cluster computing framework for big data processing. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala (and shortly R), and its core data abstraction, the distributed data frame, and it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. This tutorial will provide an accessible introduction to Spark and its potential to revolutionize academic and commercial data science practices.",
"title": ""
}
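A minimal PySpark sketch of the DataFrame-style analysis and the classic map/reduce style the abstract above mentions; the application name, data and column names are made up, and a local Spark installation is assumed.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-intro").getOrCreate()

# Small in-memory DataFrame standing in for a real dataset.
df = spark.createDataFrame(
    [("alice", 3), ("bob", 5), ("alice", 7)],
    ["user", "clicks"],
)
df.groupBy("user").agg(F.sum("clicks").alias("total_clicks")).show()

# The same engine also runs batch map/reduce style jobs.
rdd = spark.sparkContext.parallelize(range(1_000_000))
print(rdd.map(lambda x: x * x).reduce(lambda a, b: a + b))

spark.stop()
```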
] |
scidocsrr
|
bd89482883bb7c2b52fa23dfda6722a7
|
Fault tolerant permanent magnet motor drives for electric vehicles
|
[
{
"docid": "c5118bfd338ed2879477023b69fff911",
"text": "The paper describes a study and an experimental verification of remedial strategies against failures occurring in the inverter power devices of a permanent-magnet synchronous motor drive. The basic idea of this design consists in incorporating a fourth inverter pole, with the same topology and capabilities of the other conventional three poles. This minimal redundant hardware, appropriately connected and controlled, allows the drive to face a variety of power device fault conditions while maintaining a smooth torque production. The achieved results also show the industrial feasibility of the proposed fault-tolerant control, that could fit many practical applications.",
"title": ""
}
] |
[
{
"docid": "3da5087d3ba29b772ce6dcd30d4c1b67",
"text": "We prove that the coefficients of certain weight −1/2 harmonic Maass forms are “traces” of singular moduli for weak Maass forms. To prove this theorem, we construct a theta lift from spaces of weight −2 harmonic weak Maass forms to spaces of weight −1/2 vectorvalued harmonic weak Maass forms on Mp2(Z), a result which is of independent interest. We then prove a general theorem which guarantees (with bounded denominator) when such Maass singular moduli are algebraic. As an example of these results, we derive a formula for the partition function p(n) as a finite sum of algebraic numbers which lie in the usual discriminant −24n + 1 ring class field.",
"title": ""
},
{
"docid": "6277d1a524d45908acfe4045df560f36",
"text": "We present a novel method to track 3D models in color and depth data. To this end, we introduce approximations that accelerate the state-of-the-art in region-based tracking by an order of magnitude while retaining similar accuracy. Furthermore, we show how the method can be made more robust in the presence of depth data and consequently formulate a new joint contour and ICP tracking energy. We present better results than the state-of-the-art while being much faster then most other methods and achieving all of the above on a single CPU core.",
"title": ""
},
{
"docid": "d7c0d9e43f8f894fbe21154c2a26c3fd",
"text": "Decision tree classification (DTC) is a widely used technique in data mining algorithms known for its high accuracy in forecasting. As technology has progressed and available storage capacity in modern computers increased, the amount of data available to be processed has also increased substantially, resulting in much slower induction and classification times. Many parallel implementations of DTC algorithms have already addressed the issues of reliability and accuracy in the induction process. In the classification process, larger amounts of data require proportionately more execution time, thus hindering the performance of legacy systems. We have devised a pipelined architecture for the implementation of axis parallel binary DTC that dramatically improves the execution time of the algorithm while consuming minimal resources in terms of area. Scalability is achieved when connected to a high-speed communication unit capable of performing data transfers at a rate similar to that of the DTC engine. We propose a hardware accelerated solution composed of parallel processing nodes capable of independently processing data from a streaming source. Each engine processes the data in a pipelined fashion to use resources more efficiently and increase the achievable throughput. The results show that this system is 3.5 times faster than the existing hardware implementation of classification.",
"title": ""
},
{
"docid": "49ff105e4bd35d88e2cbf988e22a7a3a",
"text": "Personality testing is a popular method that used to be commonly employed in selection decisions in organizational settings. However, it is also a controversial practice according to a number researcher who claims that especially explicit measures of personality may be prone to the negative effects of faking and response distortion. The first aim of the present paper is to summarize Morgeson, Morgeson, Campion, Dipboye, Hollenbeck, Murphy and Schmitt’s paper that discussed the limitations of personality testing for performance ratings in relation to its basic conclusions about faking and response distortion. Secondly, the results of Rosse, Stecher, Miller and Levin’s study that investigated the effects of faking in personality testing on selection decisions will be discussed in detail. Finally, recent research findings related to implicit personality measures will be introduced along with the examples of the results related to the implications of those measures for response distortion in personality research and the suggestions for future research.",
"title": ""
},
{
"docid": "4a8c9a2301ea45d6c18ec5ab5a75a2ba",
"text": "We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental stimuli. Thus, posture is analyzed with simple and efficient features that are not designed to manage constraints related to the environment but only designed to describe human silhouettes. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With an accuracy of 74.29% of satisfactory posture recognition, this approach can detect emergency situations such as a fall within a health smart home.",
"title": ""
},
{
"docid": "6e4dcb451292cc38cb72300a24135c1b",
"text": "This survey gives state-of-the-art of genetic algorithm (GA) based clustering techniques. Clustering is a fundamental and widely applied method in understanding and exploring a data set. Interest in clustering has increased recently due to the emergence of several new areas of applications including data mining, bioinformatics, web use data analysis, image analysis etc. To enhance the performance of clustering algorithms, Genetic Algorithms (GAs) is applied to the clustering algorithm. GAs are the best-known evolutionary techniques. The capability of GAs is applied to evolve the proper number of clusters and to provide appropriate clustering. This paper present some existing GA based clustering algorithms and their application to different problems and domains.",
"title": ""
},
{
"docid": "2526e181083af43aac08a77c67ec402f",
"text": "In its native Europe, the bumblebee, Bombus terrestris (L.) has co-evolved with a large array of parasites whose numbers are negatively linked to the genetic diversity of the colony. In Tasmania B. terrestris was first detected in 1992 and has since spread over much of the state. In order to understand the bee’s invasive success and as part of a wider study into the genetic diversity of bumblebees across Tasmania, we screened bees for co-invasions of ectoparasitic and endoparasitic mites, nematodes and micro-organisms, and searched their nests for brood parasites. The only bee parasite detected was the relatively benign acarid mite Kuzinia laevis (Dujardin) whose numbers per bee did not vary according to region. Nests supported no brood parasites, but did contain the pollen-feeding life stages of K. laevis. Upon summer-autumn collected drones and queens, mites were present on over 80% of bees, averaged ca. 350–400 per bee and were more abundant on younger bees. Nest searching spring queens had similar mite numbers to those collected in summer-autumn but mite numbers dropped significantly once spring queens began foraging for pollen. The average number of mites per queen bee was over 30 fold greater than that reported in Europe. Mite incidence and mite numbers were significantly lower on worker bees than drones or queens, being present on just 51% of bees and averaging 38 mites per bee. Our reported incidence of worker bee parasitism by this mite is 5–50 times higher than reported in Europe. That only one parasite species co-invaded Tasmania supports the notion that a small number of queens founded the Tasmanian population. However, it is clearly evident that both the bee in the absence of parasites, and the mite have been extraordinarily successful invaders.",
"title": ""
},
{
"docid": "cf0116613c099b013ca9464af76e5121",
"text": "Grapheme-to-phoneme (G2P) conversion is an important problem for many speech and language processing applications. G2P models are particularly useful for low-resource languages that do not have well-developed pronunciation lexicons. Prominent G2P paradigms are based on initial alignments between grapheme and phoneme sequences. In this work, we devise new alignment strategies that work effectively with recurrent neural network based models when only a small number of pronunciations are available to train the models. In a small data setting, we build G2P models for Pashto, Tagalog and Lithuanian that significantly outperform a joint sequence model and a baseline recurrent neural network based model, giving up to 14% and 9% relative reductions in phone and word error rates when trained on a dataset of 250 words.",
"title": ""
},
{
"docid": "c0ebb032224694bbe9cd87885cf673da",
"text": "Appropriate management of temporomandibular disorders (TMD) requires an understanding of the underlying dysfunction associated with the temporomandibular joint (TMJ) and surrounding structures. A comprehensive examination process, as described in part 1 of this series, can reveal underlying clinical findings that assist in the delivery of comprehensive physical therapy services for patients with TMD. Part 2 of this series focuses on management strategies for TMD. Physical therapy is the preferred conservative management approach for TMD. Physical therapists are professionally well-positioned to step into the void and provide clinical services for patients with TMD. Clinicians should utilize examination findings to design rehabilitation programs that focus on addressing patient-specific impairments. Potentially appropriate plan of care components include joint and soft tissue mobilization, trigger point dry needling, friction massage, therapeutic exercise, patient education, modalities, and outside referral. Management options should address both symptom reduction and oral function. Satisfactory results can often be achieved when management focuses on patient-specific clinical variables.",
"title": ""
},
{
"docid": "410bd8286a87a766dd221c1269f05c04",
"text": "The lowand mid-frequency model of the transformer with resistive load is analysed for different values of coupling coefficients. The model comprising of coupling-dependent inductances is used to derive the following characteristics: voltage gain, current gain, bandwidth, input impedance, and transformer efficiency. It is shown that in the lowand mid-frequency range, the turns ratio between the windings is a strong function of the coupling coefficient, i.e., if the coupling coefficient decreases, then the effective turns ratio reduces. A practical transformer was designed, simulated, and tested. It was observed that the magnitudes of the voltage transfer function and current transfer function exhibit a maximum value each at a different value of coupling coefficient. In addition, as the coupling coefficient decreases, the transformer bandwidth also decreases. Furthermore, analytical expressions for the transformer efficiency for resistive loads are derived and its variation with respect to frequency at different coupling coefficients is investigated. It is shown that the transformer efficiency is maximum at any coupling coefficient if the input resistance is equal to the load resistance. Experimental validation of the theoretical results was performed using a practical transformer set-up. The theoretical predictions were found to be in good agreement with the experimental results.",
"title": ""
},
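A small numerical illustration of the central claim in the abstract above, that the effective turns ratio of a coupled-inductor transformer model falls as the coupling coefficient decreases. The lossless two-winding model, component values, load and frequency are arbitrary assumptions for the sketch, not the paper's measured transformer.

```python
import numpy as np

def voltage_gain(L1, L2, k, R_load, freq):
    """|V2/V1| of a lossless coupled-inductor transformer with a resistive load."""
    w = 2 * np.pi * freq
    M = k * np.sqrt(L1 * L2)                     # mutual inductance
    # Phasor mesh equations for a 1 V source on the primary:
    #   jwL1*I1 - jwM*I2             = 1
    #   jwM*I1  - (jwL2 + R_load)*I2 = 0
    A = np.array([[1j * w * L1, -1j * w * M],
                  [1j * w * M, -(1j * w * L2 + R_load)]])
    I1, I2 = np.linalg.solve(A, np.array([1.0, 0.0]))
    return abs(R_load * I2)

L1, L2, R, f = 1e-3, 4e-3, 1e3, 10e3             # ideal turns ratio sqrt(L2/L1) = 2
for k in (1.0, 0.95, 0.8, 0.6):
    print(f"k={k:.2f}  gain={voltage_gain(L1, L2, k, R, f):.3f}")   # gain tracks k*sqrt(L2/L1)
```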
{
"docid": "b01cb0af3dc85c5d62040c6bb0c21011",
"text": "CT scanner technology is continuously evolving, with scan times becoming shorter with each scanner generation. Achieving adequate arterial opacification synchronized with CT data acquisition is becoming increasingly difficult. A fundamental understanding of early arterial contrast medium dynamics is thus of utmost importance for the design of CT scanning and injection protocols for current and future cardiovascular CT applications. Arterial enhancement is primarily controlled by the iodine flux (injection flow rate) and the injection duration versus a patient's cardiac output and local downstream physiology. The technical capabilities of modern CT equipment require precise scan timing. Together with automated tube current modulation and weight-based injection protocols, both radiation exposure and contrast medium enhancement can be individualized.",
"title": ""
},
{
"docid": "0e477f56c7f0e1c40eadbd499b226347",
"text": "In this paper, the channel stacked array (CSTAR) NAND flash memory with layer selection by multi-level operation (LSM) of string select transistor (SST) is proposed and investigated to solve problems of conventional channel stacked array. In case of LSM architecture, the stacked layers can be distinguished by combinations of multi-level states of SST and string select line (SSL) bias. Due to the layer selection performed by the bias of SSL, the placement of bit lines and word lines is similar to the conventional planar structure, and proposed CSTAR with LSM has no island-type SSLs. As a result of the advantages of the proposed architecture, various issues of conventional channel stacked NAND flash memory array can be solved.",
"title": ""
},
{
"docid": "dd51d7253e6e249980e4f1f945f93c84",
"text": "In real-time strategy games like StarCraft, skilled players often block the entrance to their base with buildings to prevent the opponent’s units from getting inside. This technique, called “walling-in”, is a vital part of player’s skill set, allowing him to survive early aggression. However, current artificial players (bots) do not possess this skill, due to numerous inconveniences surfacing during its implementation in imperative languages like C++ or Java. In this text, written as a guide for bot programmers, we address the problem of finding an appropriate building placement that would block the entrance to player’s base, and present a ready to use declarative solution employing the paradigm of answer set programming (ASP). We also encourage the readers to experiment with different declarative approaches to this problem.",
"title": ""
},
{
"docid": "b8b6dd35c714c1b95cda6f9c9a85598d",
"text": "There is significant current interest in the problem of influence maximization: given a directed social network with influence weights on edges and a number k, find k seed nodes such that activating them leads to the maximum expected number of activated nodes, according to a propagation model. Kempe et al. showed, among other things, that under the Linear Threshold Model, the problem is NP-hard, and that a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, this algorithm suffers from various major performance drawbacks. In this paper, we propose Simpath, an efficient and effective algorithm for influence maximization under the linear threshold model that addresses these drawbacks by incorporating several clever optimizations. Through a comprehensive performance study on four real data sets, we show that Simpath consistently outperforms the state of the art w.r.t. running time, memory consumption and the quality of the seed set chosen, measured in terms of expected influence spread achieved.",
"title": ""
},
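To make the setting concrete, here is the plain Monte-Carlo greedy baseline for influence maximization under the Linear Threshold model, i.e. the kind of algorithm SIMPATH is designed to outperform; it is not an implementation of SIMPATH itself, and the graph encoding, example weights and run counts are illustrative choices.

```python
import random

def spread_lt(in_edges, seeds, runs=200):
    """Monte-Carlo estimate of expected spread under the Linear Threshold model.
    in_edges: {v: [(u, w), ...]} lists in-neighbors u of v with influence weight w,
    where the weights into each v sum to at most 1."""
    nodes = set(in_edges) | {u for es in in_edges.values() for u, _ in es}
    total = 0
    for _ in range(runs):
        theta = {v: random.random() for v in nodes}   # random node thresholds
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in nodes - active:
                if sum(w for u, w in in_edges.get(v, []) if u in active) >= theta[v]:
                    active.add(v)
                    changed = True
        total += len(active)
    return total / runs

def greedy_im(in_edges, k, runs=200):
    """Plain greedy seed selection: repeatedly add the node with best marginal spread."""
    nodes = set(in_edges) | {u for es in in_edges.values() for u, _ in es}
    seeds = []
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda v: spread_lt(in_edges, seeds + [v], runs))
        seeds.append(best)
    return seeds

graph = {"b": [("a", 0.6)], "c": [("a", 0.4), ("b", 0.5)], "d": [("c", 0.9)]}
print(greedy_im(graph, k=1, runs=300))
```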
{
"docid": "2c5750e6498bd97fdbbbd5b141819a86",
"text": "The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem-solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present Soar, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under contracts F33615-81-K-1539 and N00039-83C-0136, and by the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research, under contract number N00014-82C-0067, contract authority identification number NR667-477. Additional partial support was provided by the Sloan Foundation and some computing support was supplied by the SUMEX-AIM facility (NIH grant number RR-00785). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Office of Naval Research, the Sloan Foundation, the National Institute of Health, or the US Government.",
"title": ""
},
{
"docid": "3159856141b06a78f0d60ae8e118a251",
"text": "This paper introduces Associative Compression Networks (ACNs), a new framework for variational autoencoding with neural networks. The system differs from existing variational autoencoders in that the prior distribution used to model each code is conditioned on a similar code from the dataset. In compression terms this equates to sequentially transmitting the data using an ordering determined by proximity in latent space. As the prior need only account for local, rather than global variations in the latent space, the coding cost is greatly reduced, leading to rich, informative codes, even when autoregressive decoders are used. Experimental results on MNIST, CIFAR-10, ImageNet and CelebA show that ACNs can yield improved dataset compression relative to orderagnostic generative models, with an upper bound of 73.9 nats per image on binarized MNIST. They also demonstrate that ACNs learn high-level features such as object class, writing style, pose and facial expression, which can be used to cluster and classify the data, as well as to generate diverse and convincing samples.",
"title": ""
},
{
"docid": "30394ae468bc521e8e00db030f19e983",
"text": "A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable for holding secret-shared data. An external blockchain is utilized as the controller of the network, manages access control, identities and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.",
"title": ""
},
{
"docid": "e777794833a060f99e11675952cd3342",
"text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.",
"title": ""
},
{
"docid": "adf2ed7bde8b051dea88d4907ec9f10c",
"text": "The strong emotional reaction elicited by privacy issues is well documented (e.g., [12, 8]). The emotional aspect of privacy makes it difficult to evaluate privacy concern, and directly asking about a privacy issue may result in an emotional reaction and a biased response. This effect may be partly responsible for the dramatic privacy concern ratings coming from recent surveys, ratings that often seem to be at odds with user behavior. In this paper we propose indirect techniques for measuring content privacy concerns through surveys, thus hopefully diminishing any emotional response. We present a design for indirect surveys and test the design's use as (1) a means to measure relative privacy concerns across content types, (2) a tool for predicting unwillingness to share content (a possible indicator of privacy concern), and (3) a gauge for two underlying dimensions of privacy - content importance and the willingness to share content. Our evaluation consists of 3 surveys, taken by 200 users each, in which privacy is never asked about directly, but privacy warnings are issued with increasing escalation in the instructions and individual question-wording. We demonstrate that this escalation results in statistically and practically significant differences in responses to individual questions. In addition, we compare results against a direct privacy survey and show that rankings of privacy concerns are increasingly preserved as privacy language increases in the indirect surveys, thus indicating our mapping of the indirect questions to privacy ratings is accurately reflecting privacy concerns.",
"title": ""
}
] |
scidocsrr
|
62f117f6b4351713bf46a800db214ddd
|
Corona Discharge Surface Treater Without High Voltage Transformer
|
[
{
"docid": "24a6a976899de474d6a9e1cbc3b3bfb0",
"text": "The authors describe a 50 kHz 5 kVA voltage-source inverter using insulated gate bipolar transistors (IGBTs), a series-resonant circuit including a step-up transformer of turn ratio 1:10, and a corona surface treater. The series-resonant circuit is used as a matching circuit between the inverter of output voltage 250 V and the corona surface treater of input voltage 10 kV. Experimental results obtained from the prototype inverter system are shown to verify the stable inverter operation and proper corona discharge irrespective of load conditions. The estimated inverter efficiency is 95%, and the measured overall efficiency of the system is 74%.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "adb02577e7fba530c2406fbf53571d14",
"text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.",
"title": ""
},
{
"docid": "8f9f1bdc6f41cb5fd8b285a9c41526c1",
"text": "The rivalry between the cathode-ray tube and flat-panel displays (FPDs) has intensified as performance of some FPDs now exceeds that of that entrenched leader in many cases. Besides the wellknown active-matrix-addressed liquid-crystal display, plasma, organic light-emitting diodes, and liquid-crystal-on-silicon displays are now finding new applications as the manufacturing, process engineering, materials, and cost structures become standardized and suitable for large markets.",
"title": ""
},
{
"docid": "a9dfddc3812be19de67fc4ffbc2cad77",
"text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.",
"title": ""
},
{
"docid": "01e7d8764ee81508633a3d463a6a8709",
"text": "Facial action units (AU) are the fundamental units to decode human facial expressions. At least three aspects affect performance of automated AU detection: spatial representation, temporal modeling, and AU correlation. Unlike most studies that tackle these aspects separately, we propose a hybrid network architecture to jointly model them. Specifically, spatial representations are extracted by a Convolutional Neural Network (CNN), which, as analyzed in this paper, is able to reduce person-specific biases caused by hand-crafted descriptors (e.g., HOG and Gabor). To model temporal dependencies, Long Short-Term Memory (LSTMs) are stacked on top of these representations, regardless of the lengths of input videos. The outputs of CNNs and LSTMs are further aggregated into a fusion network to produce per-frame prediction of 12 AUs. Our network naturally addresses the three issues together, and yields superior performance compared to existing methods that consider these issues independently. Extensive experiments were conducted on two large spontaneous datasets, GFT and BP4D, with more than 400,000 frames coded with 12 AUs. On both datasets, we report improvements over a standard multi-label CNN and feature-based state-of-the-art. Finally, we provide visualization of the learned AU models, which, to our best knowledge, reveal how machines see AUs for the first time.",
"title": ""
},
{
"docid": "2efe5c0228e6325cdbb8e0922c19924f",
"text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.",
"title": ""
},
{
"docid": "7d7f0968d5c6010542f76273dfd7a353",
"text": "Numerous single image blind deblurring algorithms have been proposed to restore latent sharp images under camera motion. However, these algorithms are mainly evaluated using either synthetic datasets or few selected real blurred images. It is thus unclear how these algorithms would perform on images acquired \"in the wild\" and how we could gauge the progress in the field. In this paper, we aim to bridge this gap. We present the first comprehensive perceptual study and analysis of single image blind deblurring using real-world blurred images. First, we collect a dataset of real blurred images and a dataset of synthetically blurred images. Using these datasets, we conduct a large-scale user study to quantify the performance of several representative state-of-the-art blind deblurring algorithms. Second, we systematically analyze subject preferences, including the level of agreement, significance tests of score differences, and rationales for preferring one method over another. Third, we study the correlation between human subjective scores and several full-reference and noreference image quality metrics. Our evaluation and analysis indicate the performance gap between synthetically blurred images and real blurred image and sheds light on future research in single image blind deblurring.",
"title": ""
},
{
"docid": "aaa7da397279fc5b17a110b1e5d56cb0",
"text": "This study evaluates whether focusing on using specific muscles during bench press can selectively activate these muscles. Altogether 18 resistance-trained men participated. Subjects were familiarized with the procedure and performed one-maximum repetition (1RM) test during the first session. In the second session, 3 different bench press conditions were performed with intensities of 20, 40, 50, 60 and 80 % of the pre-determined 1RM: regular bench press, and bench press focusing on selectively using the pectoralis major and triceps brachii, respectively. Surface electromyography (EMG) signals were recorded for the triceps brachii and pectoralis major muscles. Subsequently, peak EMG of the filtered signals were normalized to maximum maximorum EMG of each muscle. In both muscles, focusing on using the respective muscles increased muscle activity at relative loads between 20 and 60 %, but not at 80 % of 1RM. Overall, a threshold between 60 and 80 % rather than a linear decrease in selective activation with increasing intensity appeared to exist. The increased activity did not occur at the expense of decreased activity of the other muscle, e.g. when focusing on activating the triceps muscle the activity of the pectoralis muscle did not decrease. On the contrary, focusing on using the triceps muscle also increased pectoralis EMG at 50 and 60 % of 1RM. Resistance-trained individuals can increase triceps brachii or pectarilis major muscle activity during the bench press when focusing on using the specific muscle at intensities up to 60 % of 1RM. A threshold between 60 and 80 % appeared to exist.",
"title": ""
},
{
"docid": "369cb3790a031d167deb5eb41e74e3ab",
"text": "Utterance classification is a critical pre-processing step for many speech understanding and dialog systems. In multi-user settings, one needs to first identify if an utterance is even directed at the system, followed by another level of classification to determine the intent of the user’s input. In this work, we propose RNN and LSTM models for both these tasks. We show how both models outperform baselines based on ngram-based language models (LMs), feedforward neural network LMs, and boosting classifiers. To deal with the high rate of singleton and out-of-vocabulary words in the data, we also investigate a word input encoding based on character ngrams, and show how this representation beats the standard one-hot vector word encoding. Overall, these proposed approaches achieve over 30% relative reduction in equal error rate compared to boosting classifier baseline on an ATIS utterance intent classification task, and over 3.9% absolute reduction in equal error rate compared to a the maximum entropy LM baseline of 27.0% on an addressee detection task. We find that RNNs work best when utterances are short, while LSTMs are best when utterances are longer.",
"title": ""
},
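As a small, concrete counterpart to the character n-gram encoding discussed in the abstract above, here is a scikit-learn baseline that builds character n-gram features and feeds them to a linear classifier. This is not the RNN/LSTM model of the paper, and the utterances, intents and n-gram range are made-up illustrative choices.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = ["play some jazz", "what is the weather", "turn off the lights",
              "is it going to rain", "play my workout playlist", "lights off please"]
intents = ["music", "weather", "home", "weather", "music", "home"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams within word boundaries
    LogisticRegression(max_iter=1000),
)
clf.fit(utterances, intents)
print(clf.predict(["please turn the lights on", "will it rain today"]))
```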
{
"docid": "c83ec9a4ec6f58ea2fe57bf2e4fa0c37",
"text": "Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, letting alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA’s high-mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.",
"title": ""
},
{
"docid": "ec3542685d1b6e71e523cdcafc59d849",
"text": "The goal of subspace segmentation is to partition a set of data drawn from a union of subspace into their underlying subspaces. The performance of spectral clustering based approaches heavily depends on learned data affinity matrices, which are usually constructed either directly from the raw data or from their computed representations. In this paper, we propose a novel method to simultaneously learn the representations of data and the affinity matrix of representation in a unified optimization framework. A novel Augmented Lagrangian Multiplier based algorithm is designed to effectively and efficiently seek the optimal solution of the problem. The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "9430b0f220538e878d99ef410fdc1ab2",
"text": "The prevalence of pregnancy, substance abuse, violence, and delinquency among young people is unacceptably high. Interventions for preventing problems in large numbers of youth require more than individual psychological interventions. Successful interventions include the involvement of prevention practitioners and community residents in community-level interventions. The potential of community-level interventions is illustrated by a number of successful studies. However, more inclusive reviews and multisite comparisons show that although there have been successes, many interventions did not demonstrate results. The road to greater success includes prevention science and newer community-centered models of accountability and technical assistance systems for prevention.",
"title": ""
},
{
"docid": "1de75569c40c5f8352a41307bab7a293",
"text": "We explore the use of neural networks trained with dropout in predicting epileptic seizures from electroencephalographic data (scalp EEG). The input to the neural network is a 126 feature vector containing 9 features for each of the 14 EEG channels obtained over 1-second, non-overlapping windows. The models in our experiments achieved high sensitivity and specificity on patient records not used in the training process. This is demonstrated using leave-one-out-cross-validation across patient records, where we hold out one patient’s record as the test set and use all other patients’ records for training; repeating this procedure for all patients in the database.",
"title": ""
},
{
"docid": "e1cf81f4d1f1b4a97358b9f421b178ad",
"text": "Motivated by the problem of learning to detect and recognize objects with minimal supervision, we develop a hierarchical probabilistic model for the spatial structure of visual scenes. In contrast with most existing models, our approach explicitly captures uncertainty in the numberof object instances depicted in a given image. Our scene model is based on the transformed Dirichlet process (TDP), a novel extension of the hierarchical DP in which a set of stochastically transformed mixture components are shared between multiple groups of data. For visual scenes, mixture components describe the spatial structure of visual features in an object–centered coordinate frame, while transformations model the object positions in a particular image. Learning and inference in the TDP, which has many potential applications beyond computer vision, is based on an empirically effective Gibbs sampler. Applied to a dataset of partially labeled street scenes, we show that the TDP’s inclusion of spatial structure improves detection performance, flexibly exploiting partially labeled training images.",
"title": ""
},
{
"docid": "1cbaabb7514b7323aac7f0648dff6260",
"text": "While traditional database systems optimize for performance on one-shot query processing, emerging large-scale monitoring applications require continuous tracking of complex data-analysis queries over collections of physically distributed streams. Thus, effective solutions have to be simultaneously space/time efficient (at each remote monitor site), communication efficient (across the underlying communication network), and provide continuous, guaranteed-quality approximate query answers. In this paper, we propose novel algorithmic solutions for the problem of continuously tracking a broad class of complex aggregate queries in such a distributed-streams setting. Our tracking schemes maintain approximate query answers with provable error guarantees, while simultaneously optimizing the storage space and processing time at each remote site, and the communication cost across the network. In a nutshell, our algorithms rely on tracking general-purpose randomized sketch summaries of local streams at remote sites along with concise prediction models of local site behavior in order to produce highly communication- and space/time-efficient solutions. The end result is a powerful approximate query tracking framework that readily incorporates several complex analysis queries (including distributed join and multi-join aggregates, and approximate wavelet representations), thus giving the first known low-overhead tracking solution for such queries in the distributed-streams model. Experiments with real data validate our approach, revealing significant savings over naive solutions as well as our analytical worst-case guarantees.",
"title": ""
},
{
"docid": "a2da0b3dde5d54f68616d3ca78a17c08",
"text": "The increase in storage capacity and the progress in information technology today lead to a rapid growth in the amount of stored data. In increasing amounts of data, gaining insight becomes rapidly more difficult. Existing automatic analysis approaches are not sufficient for the analysis of the data. The problem that the amount of stored data increases faster than the computing power to analyse the data is called information overload phenomenon. Visual analytics is an approach to overcome this problem. It combines the strengths of computers to quickly identify re-occurring patterns and to process large amounts of data with human strengths such as flexibility, intuition, and contextual knowledge. In the process of visual analytics knowledge is applied by expert users to conduct the analysis. In many settings the expert users will apply the similar knowledge continuously in several iterations or across various comparable analytical tasks. This approach is time consuming, costly and possibly frustrating for the expert users. Therefore a demand for concepts and methods to prevent repetitive analysis steps can be identified. This thesis presents a reference architecture for knowledge-based visual analytics systems, the KnoVA RA, that provides concepts and methods to represent, extract and reapply knowledge in visual analytic systems. The basic idea of the reference architecture is to extract knowledge that was applied in the analysis process in order to enhance or to derive automated analysis steps. The objective is to reduce the work-load of the experts and to enhance the traceability and reproducibility of results. The KnoVA RA consist of four parts: a model of the analysis process, the KnoVA process model, a meta data model for knowledge-based visual analytics systems, the KnoVA meta model, concepts and algorithms for the extraction of knowledge and concepts and algorithms for the reapplication of knowledge. With these concepts, the reference architecture servers as a blueprint for knowledge-based visual analytics systems. To create the reference architecture, in this thesis, two real-world scenarios from different application domains (automotive and healthcare) are introduced. These scenarios provide requirements that lead to implications for the design of the reference architecture. On the example of the motivating scenarios the KnovA RA is implemented in two visual analytics applications: TOAD, for the analysis of message traces of in-car bus communication networks and CARELIS, for the aggregation of medical records on an interactive visual interface. These systems illustrate the applicability of the KnoVA RA across different analytical challenges and problem classes.",
"title": ""
},
{
"docid": "6059ad37cced50133792086a5c95f050",
"text": "The paper discusses and evaluates the effects of an information security awareness programme. The programme emphasised employee participation, dialogue and collective reflection in groups. The intervention consisted of small-sized workshops aimed at improving information security awareness and behaviour. An experimental research design consisting of one survey before and two after the intervention was used to evaluate whether the intended changes occurred. Statistical analyses revealed that the intervention was powerful enough to significantly change a broad range of awareness and behaviour indicators among the intervention participants. In the control group, awareness and behaviour remained by and large unchanged during the period of the study. Unlike the approach taken by the intervention studied in this paper, mainstream information security awareness measures are typically top-down, and seek to bring about changes at the individual level by means of an expert-based approach directed at a large population, e.g. through formal presentations, e-mail messages, leaflets and posters. This study demonstrates that local employee participation, collective reflection and group processes produce changes in short-term information security awareness and behaviour. a 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7b43d79b6d1634ca8613d2ba85525496",
"text": "MTC are expected to play an essential role within future 5G systems. In the FP7 project METIS, MTC has been further classified into mMTC and uMTC. While mMTC is about wireless connectivity to tens of billions of machinetype terminals, uMTC is about availability, low latency, and high reliability. The main challenge in mMTC is scalable and efficient connectivity for a massive number of devices sending very short packets, which is not done adequately in cellular systems designed for human-type communications. Furthermore, mMTC solutions need to enable wide area coverage and deep indoor penetration while having low cost and being energy-efficient. In this article, we introduce the PHY and MAC layer solutions developed within METIS to address this challenge.",
"title": ""
},
{
"docid": "cb4cc56b013ca35250c4d966da843d58",
"text": "Cyber-Physical System (CPS) is a system of system which integrates physical system with cyber capability in order to improve the physical performance. It is being widely used in areas closely related to national economy and people's livelihood, therefore CPS security problems have drawn a global attention and an appropriate risk assessment for CPS is in urgent need. Existing risk assessment for CPS always focuses on the reliability assessment, using Probability Risk Assessment (PRA). In this way, the assessment of physical part and cyber part is isolated as PRA is difficult to quantify the risks from the cyber world. Methodologies should be developed to assess the both parts as a whole system, considering this integrated system has a high coupling between the physical layer and cyber layer. In this paper, a risk assessment idea for CPS with the use of attack tree is proposed. Firstly, it presents a detailed description about the threat and vulnerability attributes of each leaf in an attack tree and tells how to assign value to its threat and vulnerability vector. Then this paper focuses on calculating the threat and vulnerability vector of an attack path with the use of the leaf vector values. Finally, damage is taken into account and an idea to calculate the risk value of the whole attack path is given.",
"title": ""
},
{
"docid": "1180bfc5d6181697ae2ef586490e38e5",
"text": "A novel soft-switching hybrid converter combining the phase-shift full-bridge (FB) and half-bridge (HB) LLC resonant converters' configuration with shared zero-voltage switching (ZVS) lagging leg is proposed to ensure the switches in the lagging leg operating at fully ZVS condition. The dual outputs of the proposed hybrid FB-HB converter are connected in series and the whole dc-output voltage can be regulated by the PWM phase-shift control within the desired voltage range. A resonant circuit is used in the secondary side of the FB converter to reset the primary current during the freewheeling period, as well as to transfer more input energy and clamp secondary rectifier voltage. The proposed converter is attractive for hybrid electric vehicle/electric vehicle on-board charger applications. The principle of operation, the validity, and performance are illustrated and verified on a 3.7-kW experimental circuit. Experimental results show that the proposed converter can get good efficiency curves at different operation points, and the maximum efficiency is 98.30%.",
"title": ""
},
{
"docid": "781ef0722d8a03024924a556aa1dc61e",
"text": "Digital 3D mosaics generation is a current trend of NPR (Non Photorealistic Rendering) field; in this demo we present an interactive system realized in JAVA where the user can simulate ancient mosaic in a 3D environment starting for any input image. Different simulation engines able to render the so-called \"Opus Musivum\"and \"Opus Vermiculatum\" are employed. Different parameters can be dynamically adjusted to obtain very impressive results.",
"title": ""
}
] |
scidocsrr
|
2eab747db33dd9f740894fc374ce6c31
|
Microstrip Branch-Line Couplers for Crossover Application
|
[
{
"docid": "5b3709a34402fc135fdd135c77454f11",
"text": "A Ka-band 4×4 Butler matrix feeding a 4-element linear antenna array has been presented in this paper. The Butler matrix is based on a rectangular coaxial structure, constructed using five layers of gold coated micromachined silicon slices. The patch antennas are of an air-filled microstrip type, and spaced by half a wavelength at 38 GHz to form the array. The demonstrated device is 26 mm by 23 mm in size and 1.5 mm in height. The measured return losses at all input ports are better than −10 dB between 34.4 and 38.3 GHz. The measured radiation pattern of one beam has shown good agreement with the simulations.",
"title": ""
}
] |
[
{
"docid": "c6aed5c5e899898083f33eb5f42d4706",
"text": "Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide highquality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the gametheoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.",
"title": ""
},
{
"docid": "ba391ddf37a4757bc9b8d9f4465a66dc",
"text": "Adverse childhood experiences (ACEs) have been linked with risky health behaviors and the development of chronic diseases in adulthood. This study examined associations between ACEs, chronic diseases, and risky behaviors in adults living in Riyadh, Saudi Arabia in 2012 using the ACE International Questionnaire (ACE-IQ). A cross-sectional design was used, and adults who were at least 18 years of age were eligible to participate. ACEs event scores were measured for neglect, household dysfunction, abuse (physical, sexual, and emotional), and peer and community violence. The ACE-IQ was supplemented with questions on risky health behaviors, chronic diseases, and mood. A total of 931 subjects completed the questionnaire (a completion rate of 88%); 57% of the sample was female, 90% was younger than 45 years, 86% had at least a college education, 80% were Saudi nationals, and 58% were married. One-third of the participants (32%) had been exposed to 4 or more ACEs, and 10%, 17%, and 23% had been exposed to 3, 2, or 1 ACEs respectively. Only 18% did not have an ACE. The prevalence of risky health behaviors ranged between 4% and 22%. The prevalence of self-reported chronic diseases ranged between 6% and 17%. Being exposed to 4 or more ACEs increased the risk of having chronic diseases by 2-11 fold, and increased risky health behaviors by 8-21 fold. The findings of this study will contribute to the planning and development of programs to prevent child maltreatment and to alleviate the burden of chronic diseases in adults.",
"title": ""
},
{
"docid": "169db6ecec2243e3566079cd473c7afe",
"text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.",
"title": ""
},
{
"docid": "b6d47dc227f767009c40599f65e25c5f",
"text": "Radio frequency (RF) tomography is proposed to detect underground voids, such as tunnels or caches, over relatively wide areas of regard. The RF tomography approach requires a set of low-cost transmitters and receivers arbitrarily deployed on the surface of the ground or slightly buried. Using the principles of inverse scattering and diffraction tomography, a simplified theory for below-ground imaging is developed. In this paper, the principles and motivations in support of RF tomography are introduced. Furthermore, several inversion schemes based on arbitrarily deployed sensors are devised. Then, limitations to performance and system considerations are discussed. Finally, the effectiveness of RF tomography is demonstrated by presenting images reconstructed via the processing of synthetic data.",
"title": ""
},
{
"docid": "5ce8a143ccb977917df41b93de16aa40",
"text": "The graduated optimization approach, also known as the continuation method, is a popular heuristic to solving non-convex problems that has received renewed interest over the last decade. Despite being popular, very little is known in terms of its theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of “zeroorder optimization”, and devise a variant of our algorithm which converges at rate of O(d/ε).",
"title": ""
},
{
"docid": "45ea8e1e27f6c687d957af561aca5188",
"text": "Impedance matching networks for nonlinear devices such as amplifiers and rectifiers are normally very challenging to design, particularly for broadband and multiband devices. A novel design concept for a broadband high-efficiency rectenna without using matching networks is presented in this paper for the first time. An off-center-fed dipole antenna with relatively high input impedance over a wide frequency band is proposed. The antenna impedance can be tuned to the desired value and directly provides a complex conjugate match to the impedance of a rectifier. The received RF power by the antenna can be delivered to the rectifier efficiently without using impedance matching networks; thus, the proposed rectenna is of a simple structure, low cost, and compact size. In addition, the rectenna can work well under different operating conditions and using different types of rectifying diodes. A rectenna has been designed and made based on this concept. The measured results show that the rectenna is of high power conversion efficiency (more than 60%) in two wide bands, which are 0.9–1.1 and 1.8–2.5 GHz, for mobile, Wi-Fi, and ISM bands. Moreover, by using different diodes, the rectenna can maintain its wide bandwidth and high efficiency over a wide range of input power levels (from 0 to 23 dBm) and load values (from 200 to 2000 Ω). It is, therefore, suitable for high-efficiency wireless power transfer or energy harvesting applications. The proposed rectenna is general and simple in structure without the need for a matching network hence is of great significance for many applications.",
"title": ""
},
{
"docid": "b468726c2901146f1ca02df13936e968",
"text": "Chinchillas have been successfully maintained in captivity for almost a century. They have only recently been recognized as excellent, long-lived, and robust pets. Most of the literature on diseases of chinchillas comes from farmed chinchillas, whereas reports of pet chinchilla diseases continue to be sparse. This review aims to provide information on current, poorly reported disorders of pet chinchillas, such as penile problems, urolithiasis, periodontal disease, otitis media, cardiac disease, pseudomonadal infections, and giardiasis. This review is intended to serve as a complement to current veterinary literature while providing valuable and clinically relevant information for veterinarians treating chinchillas.",
"title": ""
},
{
"docid": "c71d229d69d79747eca7e87e342ba6d8",
"text": "This paper proposes a road detection approach based solely on dense 3D-LIDAR data. The approach is built up of four stages: (1) 3D-LIDAR points are projected to a 2D reference plane; then, (2) dense height maps are computed using an upsampling method; (3) applying a sliding-window technique in the upsampled maps, probability distributions of neighbouring regions are compared according to a similarity measure; finally, (4) morphological operations are used to enhance performance against disturbances. Our detection approach does not depend on road marks, thus it is suitable for applications on rural areas and inner-city with unmarked roads. Experiments have been carried out in a wide variety of scenarios using the recent KITTI-ROAD benchmark, obtaining promising results when compared to other state-of-art approaches.",
"title": ""
},
{
"docid": "ca5b9cd1634431254e1a454262eecb40",
"text": "This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.",
"title": ""
},
{
"docid": "ffbab4b090448de06ff5237d43c5e293",
"text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).",
"title": ""
},
{
"docid": "b2626cc0e91d63378575caf91c89cfed",
"text": "BACKGROUND\nThe aim of the study was to compare clinical outcomes and quality of life in patients undergoing surgery for pilonidal disease with unroofing and marsupialization (UM) or rhomboid excision and Limberg flap (RELP) procedures.\n\n\nMETHODS\nOne hundred forty consecutive patients with pilonidal sinus were randomly assigned to receive either UM or RELP procedures. A specifically designed questionnaire was administered at three months to assess time from the operation until the patient was able to walk, return to daily activities, or sit without pain, time to return to work or school, and time to healing. Postoperative pain was assessed with a visual analog scale and the McGill Pain Questionnaire. Patients' quality of life was evaluated with the Cardiff Wound Impact Schedule (CWIS). Questionnaires were administered by a clinician blinded to treatment.\n\n\nRESULTS\nCompared with RELP, patients receiving UM had significantly shorter duration of operation and hospital stay, shorter time periods to walk, return to daily activities, or sit without pain and to return to work or school, and fewer complications. Time to final healing was significantly shorter and quality of life scores on the CWIS were higher in patients receiving RELP than in those receiving UM. Patients with UM had lower levels of pain one week after surgery.\n\n\nCONCLUSION\nThe unroofing and marsupialization procedure provides more clinical benefits in the treatment of pilonidal disease than rhomboid excision and Limberg flap and should be considered the procedure of choice. However, it may be associated with more inconvenience in wound care and longer healing time than rhomboid excision and Lindberg flap.",
"title": ""
},
{
"docid": "4a8c8c09fe94cddbc9cadefa014b1165",
"text": "A solution to trajectory-tracking control problem for a four-wheel-steering vehicle (4WS) is proposed using sliding-mode approach. The advantage of this controller over current control procedure is that it is applicable to a large class of vehicles with single or double steering and to a tracking velocity that is not necessarily constant. The sliding-mode approach make the solutions robust with respect to errors and disturbances, as demonstrated by the simulation results.",
"title": ""
},
{
"docid": "3a314a72ea2911844a5a3462d052f4e7",
"text": "While increasing income inequality in China has been commented on and studied extensively, relatively little analysis is available on inequality in other dimensions of human development. Using data from different sources, this paper presents some basic facts on the evolution of spatial inequalities in education and healthcare in China over the long run. In the era of economic reforms, as the foundations of education and healthcare provision have changed, so has the distribution of illiteracy and infant mortality. Across provinces and within provinces, between rural and urban areas and within rural and urban areas, social inequalities have increased substantially since the reforms began.",
"title": ""
},
{
"docid": "113cf34bf2a86a8f1a041cfd366c00b7",
"text": "People perceive and conceive of activity in terms of discrete events. Here the authors propose a theory according to which the perception of boundaries between events arises from ongoing perceptual processing and regulates attention and memory. Perceptual systems continuously make predictions about what will happen next. When transient errors in predictions arise, an event boundary is perceived. According to the theory, the perception of events depends on both sensory cues and knowledge structures that represent previously learned information about event parts and inferences about actors' goals and plans. Neurological and neurophysiological data suggest that representations of events may be implemented by structures in the lateral prefrontal cortex and that perceptual prediction error is calculated and evaluated by a processing pathway, including the anterior cingulate cortex and subcortical neuromodulatory systems.",
"title": ""
},
{
"docid": "96763245ab037e57abb3546aa12bc4fb",
"text": "This paper seeks understanding the user behavior in a social network created essentially by video interactions. We present a characterization of a social network created by the video interactions among users on YouTube, a popular social networking video sharing system. Our results uncover typical user behavioral patterns as well as show evidences of anti-social behavior such as self-promotion and other types of content pollution.",
"title": ""
},
{
"docid": "03a009652837b608d36ca5541b5fbcb4",
"text": "This Paper presents two different models for smartphone based 3D-handwritten character and gesture recognition. Smartphones available today are equipped with inbuilt sensors like accelerometer, gyroscope, and gravity sensor, which are able to provide data on motion of the device in 3D space. The sensor readings are easily accessible through native operating system provided by the smartphone. The accelerometer and gyroscope sensors data have been used in the models to make the systems more expressive and to disambiguate recognition. The acceleration generated by the hand motions are generated by the 3-axes accelerometer present in smartphone. Data from gyroscope is used to obtain the quaternion rotation matrix, which is used to remove the tilting and inclination offset. An automatic segmentation algorithm is implemented to identify individual gesture or character in a sequence. First model is based on training and evaluation mode, while second one is based on training-less algorithm. The recognition models as presented in this paper can be used for both user dependent and user independent 3d-character and gesture recognition. Results shows that, Model I gives an efficient recognition rate ranging from 90% to 94% obtained with minimal training sequence,",
"title": ""
},
{
"docid": "45d3e3e34b3a6217c59e5196d09774ef",
"text": "While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin’s open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3 f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than Paypal currently handles, with a confirmation latency of 15-20 seconds.",
"title": ""
},
{
"docid": "c32a719ac619e7a48adf12fd6a534e7c",
"text": "Using smart devices and apps in clinical trials has great potential: this versatile technology is ubiquitously available, broadly accepted, user friendly and it offers integrated sensors for primary data acquisition and data sending features to allow for a hassle free communication with the study sites. This new approach promises to increase efficiency and to lower costs. This article deals with the ethical and legal demands of using this technology in clinical trials with respect to regulation, informed consent, data protection and liability.",
"title": ""
},
{
"docid": "4f1b28d567e2a72aa2dbc025ae17a139",
"text": "Referring to recent research calls regarding the role of individual differences on technology adoption and use, this paper reports on an empirical investigation of the influence of a user’s personality on the usage of the European career-oriented social network XING and its usage intensity (n = 760). Using structural equation modeling, a significant influence of personality on the intensity of XING usage was found (R2 = 12.4%,α = 0.758). More specifically, results indicated the major role played by the personality traits Extraversion, Emotional Stability and Openness to Experience as proper predictors for XING usage. Contrary to prior research on private-oriented social media, I discovered a significant positive Emotional Stability–XING usage intensity relationship instead of a negative relationship which is explained by Goffman’s Self Presentation Theory.",
"title": ""
},
{
"docid": "e6a913ca404c59cd4e0ecffaf18144e5",
"text": "SPARQL is the standard language for querying RDF data. In this article, we address systematically the formal study of the database aspects of SPARQL, concentrating in its graph pattern matching facility. We provide a compositional semantics for the core part of SPARQL, and study the complexity of the evaluation of several fragments of the language. Among other complexity results, we show that the evaluation of general SPARQL patterns is PSPACE-complete. We identify a large class of SPARQL patterns, defined by imposing a simple and natural syntactic restriction, where the query evaluation problem can be solved more efficiently. This restriction gives rise to the class of well-designed patterns. We show that the evaluation problem is coNP-complete for well-designed patterns. Moreover, we provide several rewriting rules for well-designed patterns whose application may have a considerable impact in the cost of evaluating SPARQL queries.",
"title": ""
}
] |
scidocsrr
|
6a2635cc89632291377fa4ab00003d1e
|
Multimedia Big Data Analytics: A Survey
|
[
{
"docid": "0cfac94bf56f39386802571ecd45cd3b",
"text": "Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.",
"title": ""
},
{
"docid": "4d24a09dcbac1cc33a88bbabc89102d8",
"text": "Streaming data analysis in real time is becoming the fastest and most efficient way to obtain useful knowledge from what is happening now, allowing organizations to react quickly when problems appear or to detect new trends helping to improve their performance. Evolving data streams are contributing to the growth of data created over the last few years. We are creating the same quantity of data every two days, as we created from the dawn of time up until 2003. Evolving data streams methods are becoming a low-cost, green methodology for real time online prediction and analysis. We discuss the current and future trends of mining evolving data streams, and the challenges that the field will have to overcome during the next years.",
"title": ""
}
] |
[
{
"docid": "eedcc78c372ec6aba012c3cc5cf2f71f",
"text": "Recent evidence indicates the involvement of microRNAs (miRNAs), in cell growth control, differentiation, and apoptosis, thus playing a role in tumorigenesis. Single-nucleotide polymorphisms (SNPs) located at miRNA-binding sites (miRNA-binding SNPs) are likely to affect the expression of the miRNA target and may contribute to the susceptibility of humans to common diseases. We genotyped SNPs hsa-mir196a2 (rs11614913), hsa-mir146a (rs2910164), and hsa-mir499 (rs3746444) in a case–control study including 159 prostate cancer patients and 230 matched controls. Patients with heterozygous genotype in hsa-mir196a2 and hsa-mir499, showed significant risk for developing prostate cancer (P = 0.01; OR = 1.70 and P ≤ 0.001; OR = 2.27, respectively). Similarly, the variant allele carrier was also associated with prostate cancer, (P = 0.01; OR = 1.66 and P ≤ 0.001; OR = 1.97, respectively) whereas, hsa-mir146a revealed no association in prostate cancer. None of the miRNA polymorphisms were associated with Gleason grade and bone metastasis. This is the first study on Indian population substantially presenting that individual as well as combined genotypes of miRNA-related variants may be used to predict the risk of prostate cancer and may be useful for identifying patients at high risk.",
"title": ""
},
{
"docid": "df96263c86a36ed30e8a074354b09239",
"text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: chintha@ece.ualberta.ca Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010",
"title": ""
},
{
"docid": "a48ac362b2206e608303231593cf776b",
"text": "Model-based test case generation is gaining acceptance to the software practitioners. Advantages of this are the early detection of faults, reducing software development time etc. In recent times, researchers have considered different UML diagrams for generating test cases. Few work on the test case generation using activity diagrams is reported in literatures. However, the existing work consider activity diagrams in method scope and mainly follow UML 1.x for modeling. In this paper, we present an approach of generating test cases from activity diagrams using UML 2.0 syntax and with use case scope. We consider a test coverage criterion, called activity path coverage criterion. The test cases generated using our approach are capable of detecting more faults like synchronization faults, loop faults unlike the existing approaches.",
"title": ""
},
{
"docid": "e16f1b1d4b583f5d198eac8d01d12c48",
"text": "Mathematical models have been widely used in the studies of biological signaling pathways. Among these studies, two systems biology approaches have been applied: top-down and bottom-up systems biology. The former approach focuses on X-omics researches involving the measurement of experimental data in a large scale, for example proteomics, metabolomics, or fluxomics and transcriptomics. In contrast, the bottom-up approach studies the interaction of the network components and employs mathematical models to gain some insights about the mechanisms and dynamics of biological systems. This chapter introduces how to use the bottom-up approach to establish mathematical models for cell signaling studies.",
"title": ""
},
{
"docid": "a1a97d01518aed3573e934bb9d0428f3",
"text": "The use of social networking websites has become a current international phenomenon. Popular websites include MySpace, Facebook, and Friendster. Their rapid widespread use warrants a better understanding. However, there has been little empirical research studying the factors that determine the use of this hedonic computer-mediated communication technology This study contributes to our understanding of the antecedents that influence adoption and use of social networking websites by examining the effect of the perceptions of playfulness, critical mass, trust, and normative pressure on the use of social networking sites.. Structural equation modeling was used to examine the patterns of inter-correlations among the constructs and to empirically test the hypotheses. Each of the antecedents has a significant direct effect on intent to use social networking websites, with playfulness and critical mass the strongest indicators. Intent to use and playfulness had a significant direct effect on actual usage.",
"title": ""
},
{
"docid": "f56ee543d0f4d2a0f707e5f471d8d7e6",
"text": "Programmers commonly reuse existing frameworks or libraries to reduce software development efforts. One common problem in reusing the existing frameworks or libraries is that the programmers know what type of object that they need, but do not know how to get that object with a specific method sequence. To help programmers to address this issue, we have developed an approach that takes queries of the form \"Source object type → Destination object type\" as input, and suggests relevant method-invocation sequences that can serve as solutions that yield the destination object from the source object given in the query. Our approach interacts with a code search engine (CSE) to gather relevant code samples and performs static analysis over the gathered samples to extract required sequences. As code samples are collected on demand through CSE, our approach is not limited to queries of any specific set of frameworks or libraries. We have implemented our approach with a tool called PARSEWeb, and conducted four different evaluations to show that our approach is effective in addressing programmer's queries. We also show that PARSEWeb performs better than existing related tools: Prospector and Strathcona",
"title": ""
},
{
"docid": "34e6033c7eb0bc0e16847c8c9b9d113c",
"text": "Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows to rephrase the maximum-likelihoodproblem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.",
"title": ""
},
{
"docid": "e9672c597e9fbd9e3dda468a86d1db80",
"text": "Objectives Depression is one of the common mental health problems noticed in immigrants because of the experiences related to their resettlement which is the case for Somali population. Depression increases mortality, morbidity, disability, and costs of health care which can be controlled by screening depression in the primary care setting using a culturally and linguistically congruent screening tool. The aim of the current study is to translate the English PHQ-9 into Somali language using evidence-based translational methodology and establish psychometric properties of the Somali PHQ-9. Methods The initial validation of the Somali PHQ-9 was studied by comparing the original and back translation versions using the comparability and interpretability tool in a sample of 56 monolingual health care professionals. The reliability and validity of the Somali version were established by psychometric analysis in a sample of 47 bilingual health-care workers. Results Cronbach's alpha was 0.79 for the Somali version with the inter-item correlation mean of 0.33 and item-to-total correlation mean in the range of 0.40-0.80 ( p < 0.01). Pearson correlation for the item-to-item correlation between English and Somali version was between 0.70 and 0.93 ( p < 0.01) with the paired t-test showing no significant difference between the item means. Conclusions The Somali PHQ-9 showed a good reliability, homogeneity, and internal consistency. The construct validity for the Somali PHQ-9 was also established showing that the Somali PHQ-9 has similar reliability and validity like the other translated versions of PHQ-9.",
"title": ""
},
{
"docid": "37a0c6ac688c7d7f2dd622ebbe3ec184",
"text": "Prior research shows that directly applying phrase-based SMT on lexical tokens to migrate Java to C# produces much semantically incorrect code. A key limitation is the use of sequences in phrase-based SMT to model and translate source code with well-formed structures. We propose mppSMT, a divide-and-conquer technique to address that with novel training and migration algorithms using phrase-based SMT in three phases. First, mppSMT treats a program as a sequence of syntactic units and maps/translates such sequences in two languages to one another. Second, with a syntax-directed fashion, it deals with the tokens within syntactic units by encoding them with semantic symbols to represent their data and token types. This encoding via semantic symbols helps better migration of API usages. Third, the lexical tokens corresponding to each sememe are mapped or migrated. The resulting sequences of tokens are merged together to form the final migrated code. Such divide-and-conquer and syntax-direction strategies enable phrase-based SMT to adapt well to syntactical structures in source code, thus, improving migration accuracy. Our empirical evaluation on several real-world systems shows that 84.8 -- 97.9% and 70 -- 83% of the migrated methods are syntactically and semantically correct, respectively. 26.3 -- 51.2% of total migrated methods are exactly matched to the human-written C# code in the oracle. Compared to Java2CSharp, a rule-based migration tool, it achieves higher semantic accuracy from 6.6 -- 57.7% relatively. Importantly, it does not require manual labeling for training data or manual definition of rules.",
"title": ""
},
{
"docid": "9d089af812c0fdd245a218362d88b62a",
"text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.",
"title": ""
},
{
"docid": "90ee6bedafe6a0ad7d6fd2c07bab5af9",
"text": "With over 40 years of history, image understanding, in particular, scene classification and recognition remains central to machine vision. With an abundance of image and video databases, it is necessary to be able to sort and retrieve the images and videos in a way that is both efficient and effective. This is possible only if the categories of images and/or their context are known to a user. Hence, the ability to classify and recognize scenes accurately is of utmost importance. This paper presents a brief survey of the advances in scene recognition and classification algorithms. Depending on its goal, image understanding(IU) can be defined in many different ways. However, in general, IU means describing the image content, the objects in it, location and relations between objects, and most recently, describing the events in an image. In (Ralescu 1995) IU is equated with producing a verbal description of the image content. Scene analysis (as part of IU) and categorization is a highly useful ability of humans, who are able to categorize complex natural scenes containing animals or vehicles very quickly (Thorpe, Fize, and Marlot 1996), with little or no attention (Li et al. 2003). When a scene is presented to humans, they are able to quickly identify the scene, i.e., within a short period of exposure (< 100 ms). How do humans perform all of these tasks the way they do, is yet to be fully understood. To date, the classic text by Marr (Marr 1982) remains one of the sources of understanding the human vision systems. Many researchers have tried to imbibe this incredible capability of the human vision system into their algorithms for image processing, scene understanding and recognition. In the presence of a wealth of literature on this and related subjects, surveys of the field, even a limited one, as the present one necessarily is (due to space constraints) are bound to be very useful, by reviewing the methods for scene recognition and classification. Perhaps, the first issue to consider is the concept of scene as a technical concept to capture the natural concept. According to Xiao et al. (Xiao et al. 2010) a scene is a place in which a human can act within, or a place to which a human being could navigate. Therefore, scene recognition and scene classification algorithms must delve into understanding the semantic context of the scene. According to how a scene is recognized in an image, scene recognition algorithms can be broadly divided into two categories. • Scene recognition based on object detection. • Scene recognition using low-level image features Scene recognition using object recognition (SR-OR) Using object recognition for scene classification is a straight-forward and intuitive approach to scene classification and it can assist in distinguishing very complex scenes which might otherwise prove difficult to do using standard low level features. In the paper by Li-Jia Li et al. (Li et al. 2010) the authors argue that although ”robust low-level image features have been proven to be effective representations for scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image representations are potentially not enough. ” To combat this drawback of local features, they propose a high-level image representation, called the Object Bank(OB), where an image is represented by integrating the response of the image to various object detectors. 
These object detectors or filters are blind to the testing dataset or visual task. Using OB representation, superior performances on high level visual recognition tasks can be achieved with simple regularized logistic regression. Their algorithm uses the current state-ofthe-art object detectors of Felzenszwalb et al. (Felzenszwalb et al. 2010), as well as the geometric context classifiers (stuff detectors) of Hoeim et al. (Hoiem, Efros, and Hebert 2005) for pre-training the object detectors. OB offers a rich set of object features, while presenting a challenge – curse of dimensionality due to the presence of multiple class of objects within a single image, which then yields feature vectors of very high dimension. The performance of the system plateaus at a point when the number of object detection filters is too high. According to the authors, the system performance is best, when the number of object filters is moderate. Vineeta Singh et al. MAICS 2017 pp. 85–91",
"title": ""
},
{
"docid": "b5913ebc2449061de01f0384a8f40a93",
"text": "Public procurement of information systems (IS) and IS services provides several challenges to the stakeholders involved in the procurement processes. This paper reports initial results from a Delphi study, which involved 46 experienced procurement managers, chief information officers, and vendor representatives in the Norwegian public sector. The participants identified altogether 98 challenges related to IS procurement, divided further into 13 categories: requirements specification, change management, cooperation among stakeholders, competence, competition, contracting, inter-municipal cooperation, governmental management, procurement process, rules and regulations, technology and infrastructure, vendors, and IT governance. The results contribute by supporting a few previous findings from conceptual and case-based studies, and by suggesting additional issues which deserve both further research and managerial and governmental attention. As such, the results provide altogether a rich overview of the IS procurement challenges in the Norwegian public context.",
"title": ""
},
{
"docid": "3a2729b235884bddc05dbdcb6a1c8fc9",
"text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.",
"title": ""
},
{
"docid": "23834e86743caf5d181349bac556399c",
"text": "BACKGROUND\nThere is conflicting evidence about the relationship between vitamin D deficiency and depression, and a systematic assessment of the literature has not been available.\n\n\nAIMS\nTo determine the relationship, if any, between vitamin D deficiency and depression.\n\n\nMETHOD\nA systematic review and meta-analysis of observational studies and randomised controlled trials was conducted.\n\n\nRESULTS\nOne case-control study, ten cross-sectional studies and three cohort studies with a total of 31 424 participants were analysed. Lower vitamin D levels were found in people with depression compared with controls (SMD = 0.60, 95% CI 0.23-0.97) and there was an increased odds ratio of depression for the lowest v. highest vitamin D categories in the cross-sectional studies (OR = 1.31, 95% CI 1.0-1.71). The cohort studies showed a significantly increased hazard ratio of depression for the lowest v. highest vitamin D categories (HR = 2.21, 95% CI 1.40-3.49).\n\n\nCONCLUSIONS\nOur analyses are consistent with the hypothesis that low vitamin D concentration is associated with depression, and highlight the need for randomised controlled trials of vitamin D for the prevention and treatment of depression to determine whether this association is causal.",
"title": ""
},
{
"docid": "d70a4fb982aeb2bd502519fb0a7d5c7b",
"text": "We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. e main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. e bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent. Parts of the work were done when Tongliang Liu was a visiting PhD student at Pompeu Fabra University. School of Information Technologies, Faculty Engineering and Information Technologies, University of Sydney, Sydney, Australia, tliang.liu@gmail.com, dacheng.tao@sydney.edu.au Department of Economics and Business, Pompeu Fabra University, Barcelona, Spain, gabor.lugosi@upf.edu ICREA, Pg. Llus Companys 23, 08010 Barcelona, Spain Barcelona Graduate School of Economics AI group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain, gergely.neu@gmail.com 1",
"title": ""
},
{
"docid": "0481c35949653971b75a3a4c3051c590",
"text": "Handling appearance variations is a very challenging problem for visual tracking. Existing methods usually solve this problem by relying on an effective appearance model with two features: 1) being capable of discriminating the tracked target from its background 2) being robust to the target’s appearance variations during tracking. Instead of integrating the two requirements into the appearance model, in this paper, we propose a tracking method that deals with these problems separately based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates with an additive representation error. Discriminating the target from its background is achieved by activating the target templates or the background templates in the linear system in a competitive manner. The target’s appearance variations are directly modeled as the representation error. An online algorithm is used to learn the basis functions that sparsely span the representation error. The linear system is solved via l1 minimization. The candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We test the proposed approach using four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two latest state-of-the-art trackers.",
"title": ""
},
{
"docid": "ef9235285ebbef109254bfb5968d2d6b",
"text": "This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant memory pieces are adaptively selected based on the input query. In the problem of ABSA, this is analogous to aspects and documents in which the relationship between each word in the document is compared with the aspect vector. In the standard memory networks, simple dot products or feed forward neural networks are used to model the relationship between aspect and words which lacks representation learning capability. As such, our dyadic memory networks ameliorates this weakness by enabling rich dyadic interactions between aspect and word embeddings by integrating either parameterized neural tensor compositions or holographic compositions into the memory selection operation. To this end, we propose two variations of our dyadic memory networks, namely the Tensor DyMemNN and Holo DyMemNN. Overall, our two models are end-to-end neural architectures that enable rich dyadic interaction between aspect and document which intuitively leads to better performance. Via extensive experiments, we show that our proposed models achieve the state-of-the-art performance and outperform many neural architectures across six benchmark datasets.",
"title": ""
},
{
"docid": "fbf6e584128b09b6ae00f4a51bca3571",
"text": "Accurate event analysis in real time is of paramount importance for high-fidelity situational awareness such that proper actions can take place before any isolated faults escalate to cascading blackouts. For large-scale power systems, due to the large intra-class variance and inter-class similarity, the nonlinear nature of the system, and the large dynamic range of the event scale, multi-event analysis presents an intriguing problem. Existing approaches are limited to detecting only single or double events or a specified event type. Although some previous works can well distinguish multiple events in small-scale power systems, the performance tends to degrade dramatically in large-scale systems. In this paper, we focus on multiple event detection, recognition, and temporal localization in large-scale power systems. We discover that there always exist groups of buses whose reaction to each event shows high degree similarity, and the group membership generally remains the same regardless of the type of event(s). We further verify that this reaction to multiple events can be approximated as a linear combination of reactions to each constituent event. Based on these findings, we propose a novel method, referred to as cluster-based sparse coding (CSC), to extract all the underlying single events involved in a multi-event scenario. Experimental results based on simulated large-scale system model (i.e., NPCC) show that the proposed CSC algorithm presents high detection and recognition rate with low false alarms.",
"title": ""
},
{
"docid": "6a41320c9692b601fefda9a2fbce7f26",
"text": "One of the most critical reliability problems that the LED COBs are facing is the failure of wire bonding connections. The LED COBs would easily break down due to the mechanical shocks or material stress on the gold wire in conventional package roadmaps. Flip chip technologies are excellent options for solving this problem. This paper has been focused on the challenges during the design and manufacturing of flip chip LED COBs based on metal core print circuit boards (MCPCBs). The properties of solder paste, Au/Sn eutectic, and ceramic buffer die-bonding structures are studied. It is found out that the former two COBs have better thermal and optical performance, while the last COB shows good reliability with low leakage current.",
"title": ""
},
{
"docid": "fdbdac5f319cd46aeb73be06ed64cbb9",
"text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.",
"title": ""
}
] |
scidocsrr
|
d89527476f739a1deb6e2442011ecd28
|
Paraphrase acquisition via crowdsourcing and machine learning
|
[
{
"docid": "0b5616a9e272183502e198886a251513",
"text": "Recently, Amazon Mechanical Turk has gained a lot of attention as a tool for conducting different kinds of relevance evaluations. In this paper we show a series of experiments on TREC data, evaluate the outcome, and discuss the results. Our position, supported by these preliminary experimental results, is that crowdsourcing is a viable alternative for relevance assessment.",
"title": ""
}
] |
[
{
"docid": "a7db9f3f1bb5883f6a5a873dd661867b",
"text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.",
"title": ""
},
{
"docid": "e5c7026b7970276a2814001c489792df",
"text": "The three level buck converter can offer high efficiency and high power density in VR and POL applications. The gains are made possible by adding a flying capacitor that reduces the MOSFET voltage stress by half allowing for the use of low voltage devices, doubles the effective switching frequency, and decreases the inductor size by reducing the volt-second across the inductor. To achieve high efficiency and power density the flying capacitor must be balanced at half of the input voltage and the circuit must be started up without the MOSFETs seeing the full input voltage for protection purposes. This paper provides a new novel control method to balance the flying capacitor with the use of current control and offers a simple startup solution to protect the MOSFETs during start up. Experimental verification shows the efficiency gains and inductance reduction.",
"title": ""
},
{
"docid": "42f5e355ddf13e5e339bd46d5ff584fd",
"text": "The phenomenal growth of the Internet in the last decade and society's increasing dependence on it has brought along, a flood of security attacks on the networking and computing infrastructure. Intrusion detection/prevention systems provide defenses against these attacks by monitoring headers and payload of packets flowing through the network. Multiple string matching that can compare hundreds of string patterns simultaneously is a critical component of these systems, and is a well-studied problem. Most of the string matching solutions today are based on the classic Aho-Corasick algorithm, which has an inherent limitation; they can process only one input character in one cycle. As memory speed is not growing at the same pace as network speed, this limitation has become a bottleneck in the current network, having speeds of tens of gigabits per second. In this paper, we propose a novel multiple string matching algorithm that can process multiple characters at a time thus achieving multi-gigabit rate search speeds. We also propose an architecture for an efficient implementation on TCAM-based hardware. We additionally propose novel optimizations by making use of the properties of TCAMs to significantly reduce the memory requirements of the proposed algorithm. We finally present extensive simulation results of network-based virus/worm detection using real signature databases to illustrate the effectiveness of the proposed scheme.",
"title": ""
},
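The passage above contrasts the classic Aho-Corasick automaton (one input character per cycle) with a multi-character TCAM variant. For reference, a compact software Aho-Corasick matcher — this is the well-known baseline the paper generalizes, not the paper's TCAM scheme.

```python
# Compact Aho-Corasick automaton: the classic one-character-per-step matcher
# that the passage above uses as its baseline (the TCAM multi-character
# scheme itself is not reproduced here).
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                                  # 1. build the trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto[s][ch] = len(goto)
                goto.append({}); fail.append(0); out.append(set())
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())                         # 2. BFS failure links
    while q:
        r = q.popleft()
        for ch, u in goto[r].items():
            q.append(u)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[u] = goto[f][ch] if ch in goto[f] and goto[f][ch] != u else 0
            out[u] |= out[fail[u]]
    return goto, fail, out

def search(text, patterns):
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):                       # one character per step
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i, p) for p in out[s]]
    return hits

print(search("caternary pattern", ["cat", "tern", "pat"]))
```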
{
"docid": "debcc046323ffbd9a093c8e07d37960e",
"text": "This review discusses the theory and practical application of independent component analysis (ICA) to multi-channel EEG data. We use examples from an audiovisual attention-shifting task performed by young and old subjects to illustrate the power of ICA to resolve subtle differences between evoked responses in the two age groups. Preliminary analysis of these data using ICA suggests a loss of task specificity in independent component (IC) processes in frontal and somatomotor cortex during post-response periods in older as compared to younger subjects, trends not detected during examination of scalp-channel event-related potential (ERP) averages. We discuss possible approaches to component clustering across subjects and new ways to visualize mean and trial-by-trial variations in the data, including ERP-image plots of dynamics within and across trials as well as plots of event-related spectral perturbations in component power, phase locking, and coherence. We believe that widespread application of these and related analysis methods should bring EEG once again to the forefront of brain imaging, merging its high time and frequency resolution with enhanced cm-scale spatial resolution of its cortical sources.",
"title": ""
},
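The review above revolves around ICA decomposition of multi-channel EEG into independent component processes. A toy illustration with scikit-learn's FastICA on synthetic mixed sources; real EEG pipelines (e.g., EEGLAB) typically use extended infomax ICA, and the signals here are simulated, not recorded EEG.

```python
# Toy ICA decomposition of "multi-channel" data, in the spirit of the review
# above. Sources are synthetic stand-ins for brain/artifact processes.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(7 * t),                     # oscillation-like source
                np.sign(np.sin(3 * t)),            # square-wave "artifact"
                rng.standard_normal(t.size)]       # noise component
mixing = rng.standard_normal((8, 3))               # 8 hypothetical channels
eeg = sources @ mixing.T                           # channels = mixed sources

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(eeg)                # estimated source activations
scalp_maps = ica.mixing_                           # per-component channel weights
print(components.shape, scalp_maps.shape)          # (2000, 3) (8, 3)
```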
{
"docid": "7d4167e9b5944b978ec5be08b4f01665",
"text": "Recently, incremental and on-line learning gained more attention especially in the context of big data and learning from data streams, conflicting with the traditional assumption of complete data availability. Even though a variety of different methods are available, it often remains unclear which of them is suitable for a specific task and how they perform in comparison to each other. We analyze the key properties of eight popular incremental methods representing different algorithm classes. Thereby, we evaluate them with regards to their on-line classification error as well as to their behavior in the limit. Further, we discuss the often neglected issue of hyperparameter optimization specifically for each method and test how robustly it can be done based on a small set of examples. Our extensive evaluation on data sets with different characteristics gives an overview of the performance with respect to accuracy, convergence speed as well as model complexity, facilitating the choice of the best method for a given application.",
"title": ""
},
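The survey above evaluates incremental methods by their on-line classification error. A minimal "test-then-train" (prequential) evaluation loop; scikit-learn's SGDClassifier is only a stand-in for the eight methods compared in the paper, and the data are synthetic.

```python
# Minimal prequential (test-then-train) evaluation loop for an incremental
# learner, the kind of on-line error measurement discussed in the passage
# above. SGDClassifier and make_classification are stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
classes = np.unique(y)
clf = SGDClassifier(random_state=0)

errors = 0
for i in range(len(X)):
    xi, yi = X[i:i + 1], y[i:i + 1]
    if i > 0:                                   # 1. test on the new example
        errors += int(clf.predict(xi)[0] != yi[0])
    clf.partial_fit(xi, yi, classes=classes)    # 2. then train on it
print("prequential error:", errors / (len(X) - 1))
```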
{
"docid": "67d126ce0e2060c5a94f9171c972fffc",
"text": "We study the relationship between social media output and National Football League (NFL) games, using a dataset containing messages from Twitter and NFL game statistics. Specifically, we consider tweets pertaining to specific teams and games in the NFL season and use them alongside statistical game data to build predictive models for future game outcomes (which team will win?) and sports betting outcomes (which team will win with the point spread? will the total points be over/under the line?). We experiment with several feature sets and find that simple features using large volumes of tweets can match or exceed the performance of more traditional features that use game statistics.",
"title": ""
},
{
"docid": "0db28b5ec56259c8f92f6cc04d4c2601",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a1826398c8f5e94ed1fe2f6fa76ab21c",
"text": "In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN [14], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provide a global view for people to understand the deep learning object detection pipeline.",
"title": ""
},
{
"docid": "ae3d93605ada1dddfffdef5d65b72975",
"text": "The use of brain monitoring based on EEG, in natural environments and over long time periods, is hindered by the limited portability of current wearable systems, and the invasiveness of implanted systems. To that end, we introduce an ear-EEG recording device based on generic earpieces which meets key patient needs (discreet, unobstrusive, user-friendly, robust) and that is low-cost and suitable for off-the-shelf use; thus promising great advantages for healthcare applications. Its feasibility is validated in a comprehensive comparative study with our established prototype, based on a personalized earpiece, for a key EEG paradigm.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "cc56706151e027c89eea5639486d4cd3",
"text": "To refine user interest profiling, this paper focuses on extending scientific subject ontology via keyword clustering and on improving the accuracy and effectiveness of recommendation of the electronic academic publications in online services. A clustering approach is proposed for domain keywords for the purpose of the subject ontology extension. Based on the keyword clusters, the construction of user interest profiles is presented on a rather fine granularity level. In the construction of user interest profiles, we apply two types of interest profiles: explicit profiles and implicit profiles. The explicit eighted keyword graph",
"title": ""
},
{
"docid": "f3e63f3fb0ce0e74697e0a74867d9671",
"text": "Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.",
"title": ""
},
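The passage above describes a multi-task network that jointly predicts a continuous steering angle and a discrete speed command from camera frames. A hedged PyTorch sketch of such a two-headed model with a combined loss; the backbone size, number of speed classes, and loss weighting are assumptions, not the paper's configuration.

```python
# Sketch of a two-headed multi-task model (steering regression + discrete
# speed-command classification), in the spirit of the passage above.
import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    def __init__(self, n_speed_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(              # shared visual features
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steer_head = nn.Linear(32, 1)           # continuous steering angle
        self.speed_head = nn.Linear(32, n_speed_classes)  # discrete speed command

    def forward(self, img):
        h = self.backbone(img)
        return self.steer_head(h).squeeze(1), self.speed_head(h)

net = DrivingNet()
img = torch.randn(8, 3, 66, 200)                     # batch of camera frames
steer_true = torch.randn(8)
speed_true = torch.randint(0, 4, (8,))
steer_pred, speed_logits = net(img)
loss = nn.functional.mse_loss(steer_pred, steer_true) \
     + 0.5 * nn.functional.cross_entropy(speed_logits, speed_true)
loss.backward()
```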
{
"docid": "cd67a650969aa547cad8e825511c45c2",
"text": "We present DAPIP, a Programming-By-Example system that learns to program with APIs to perform data transformation tasks. We design a domainspecific language (DSL) that allows for arbitrary concatenations of API outputs and constant strings. The DSL consists of three family of APIs: regular expression-based APIs, lookup APIs, and transformation APIs. We then present a novel neural synthesis algorithm to search for programs in the DSL that are consistent with a given set of examples. The search algorithm uses recently introduced neural architectures to encode input-output examples and to model the program search in the DSL. We show that synthesis algorithm outperforms baseline methods for synthesizing programs on both synthetic and real-world benchmarks.",
"title": ""
},
{
"docid": "78ec561e9a6eb34972ab238a02fdb40a",
"text": "OBJECTIVE\nTo evaluate the safety and efficacy of mass circumcision performed using a plastic clamp.\n\n\nMETHODS\nA total of 2013 males, including infants, children, adolescents, and adults were circumcised during a 7-day period by using a plastic clamp technique. Complications were analyzed retrospectively in regard to 4 different age groups. Postcircumcision sexual function and satisfaction rates of the adult males were also surveyed.\n\n\nRESULTS\nThe mean duration of circumcision was 3.6±1.2 minutes. Twenty-six males who were lost to follow-up were excluded from the study. The total complication rate was found to be 2.47% among the remaining 1987 males, with a mean age of 7.8±2.5 years. The highest complication rate (2.93%) was encountered among the children<2 years age, which was because of the high rate of buried penis (0.98%) and excessive foreskin (0.98%) observed in this group. The complication rates of older children, adolescents, and adults were slightly lower than the children<2 years age, at 2.39%, 2.51%, and 2.40%, respectively. Excessive foreskin (0.7%) was the most common complication observed after mass circumcision. Bleeding (0.6%), infection (0.55%), wound dehiscence (0.25%), buried penis (0.25%), and urine retention (0.1%) were other encountered complications. The erectile function and sexual libido in adolescents and adults was not affected by circumcision and a 96% satisfaction rate was obtained.\n\n\nCONCLUSIONS\nMass circumcision performed by a plastic clamp technique was found to be a safe and time-saving method of circumcising a large number of males at any age.",
"title": ""
},
{
"docid": "c0915edefb5650a80466c6cf0766cec6",
"text": "This paper presents the design and implementation of a fully integrated high-voltage (HV) front-end transducer for MEMS ultrasonic applications. Each of the front-end transducers in the array includes a HV transmitting driver and a 50MHz capacitive micro-machined ultrasound transducer (CMUT). The design adapts low voltage integrated circuit (IC) design techniques to create integrated high voltage interfaces. Effective methods to implement high voltage interfaces to integrated CMOS/DMOS circuits are presented. Using a 0.18μm CMOS/DMOS process, simulation results of this interface IC show that can generated a 30V high voltage pulse from a 1.8V input triggering signal to drive a 50MHz MEMS ultrasound transducer with the largest 44pF capacitive load.",
"title": ""
},
{
"docid": "db2ab575c10cbd334469a9d7c294d9dd",
"text": "Bias in online information has recently become a pressing issue, with search engines, social networks and recommendation services being accused of exhibiting some form of bias. In this vision paper, we make the case for a systematic approach towards measuring bias. To this end, we discuss formal measures for quantifying the various types of bias, we outline the system components necessary for realizing them, and we highlight the related research challenges and open problems.",
"title": ""
},
{
"docid": "6549a00df9fadd56b611ee9210102fe8",
"text": "Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the Semantic Web effort grows, a larger community of users for this kind of tools is expected. New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for Semantic Web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment and a subsequent user-testing phase. The target population comprises people with no specific ontology-creation skills that have a general knowledge about domain modelling. The problems found point out that, for this kind of users, current editors are adequate for the creation and maintenance of simple ontologies, but also that there is room for improvement, especially in browsing mechanisms, help systems and visualization metaphors.",
"title": ""
},
{
"docid": "545b41a21edb2fa08fd6680d3d20afaf",
"text": "SUMMARY This paper demonstrate how Gaussian Markov random fields (conditional autoregressions) can be fast sampled using numerical techniques for sparse matrices. The algorithm is general , surprisingly efficient, and expands easily to various forms for conditional simulation and evaluation of normalisation constants. I demonstrate its use in Markov chain Monte Carlo algorithms for disease mapping, space varying regression model, spatial non-parametrics, hierarchical space-time modelling and Bayesian imaging. Håkon Tjelmeland and Darren J. Wilkinson for stimulating discussions, and Leo Knorr-Held for providing the oral cavity cancer data and the region-map of Germany. on \" Computational and Statistical methods for the analysis of spatial data \" .",
"title": ""
},
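The summary above concerns fast sampling of Gaussian Markov random fields via sparse-matrix numerics. A small illustration of the core trick: draw x ~ N(0, Q⁻¹) by factorising the precision matrix Q = L Lᵀ and solving Lᵀx = z, here for a first-order CAR-style prior on a lattice. Dense Cholesky keeps the demo short; an actual implementation would use a sparse factorisation such as CHOLMOD.

```python
# Sample a zero-mean GMRF x ~ N(0, Q^{-1}) by factorising the precision
# matrix Q = L L^T and solving L^T x = z, the basic approach summarised
# above. A first-order intrinsic-style precision on a small 2-D lattice plus
# a tiny ridge is used; dense Cholesky is only for brevity.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import cholesky, solve_triangular

n = 20                                          # 20 x 20 lattice
I = sp.identity(n)
D = sp.diags([1, -1], [0, 1], shape=(n, n))     # 1-D difference operator
R1 = D.T @ D                                    # 1-D structure matrix
Q = (sp.kron(R1, I) + sp.kron(I, R1)            # 2-D lattice precision
     + 1e-4 * sp.identity(n * n)).toarray()     # small ridge -> proper GMRF

L = cholesky(Q, lower=True)                     # Q = L L^T
z = np.random.default_rng(0).standard_normal(n * n)
x = solve_triangular(L.T, z, lower=False)       # then Cov(x) = Q^{-1}
print(x.reshape(n, n).shape)
```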
{
"docid": "15df2ec32bc856f0eb7cd7b511346a95",
"text": "Massive online social networks with hundreds of millions of active users are increasingly being used by Cyber criminals to spread malicious software (malware) to exploit vulnerabilities on the machines of users for personal gain. Twitter is particularly susceptible to such activity as, with its 140 character limit, it is common for people to include URLs in their tweets to link to more detailed information, evidence, news reports and so on. URLs are often shortened so the endpoint is not obvious before a person clicks the link. Cyber criminals can exploit this to propagate malicious URLs on Twitter, for which the endpoint is a malicious server that performs unwanted actions on the person's machine. This is known as a drive-by-download. In this paper we develop a machine classification system to distinguish between malicious and benign URLs within seconds of the URL being clicked (i.e. 'real-time'). We train the classifier using machine activity logs created while interacting with URLs extracted from Twitter data collected during a large global event -- the Superbowl -- and test it using data from another large sporting event -- the Cricket World Cup. The results show that machine activity logs produce precision performances of up to 0.975 on training data from the first event and 0.747 on a test data from a second event. Furthermore, we examine the properties of the learned model to explain the relationship between machine activity and malicious software behaviour, and build a learning curve for the classifier to illustrate that very small samples of training data can be used with only a small detriment to performance.",
"title": ""
},
{
"docid": "ee17d65e01a597b24ab65753bfedf55d",
"text": "The name \"WiMAX\" was created by the “WiMAX Forum”, which was formed in June. WiMAX (Worldwide Interoperability for Microwave Access) standards define formal specifications for deployment of broadband wireless metropolitan area networks (wireless MANs).Wireless MANs as needed in WiMAX standards provide wireless broadband access anywhere, anytime, and on virtually any device. Introducing the various type of scheduling algorithm, like FIFO,PQ,WFQ, for comparison of four type of scheduling service, with its own QoS needs and also introducing OPNET modeler support for Worldwide Interoperability for Microwave Access (WiMAX) network. The simulation results indicate the correctness and the effectiveness of theses algorithm. This paper presents a WiMAX simulation model designed with OPNET modeler 14 to measure the delay, load and the throughput performance factors.",
"title": ""
}
] |
scidocsrr
|
bcaf0981d8d195a7548f08ee1b6ec846
|
Machine Comprehension using Rich Semantic Representations
|
[
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
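The abstract above describes a lexical matching baseline that scores candidate answers by overlap with the passage over multiple context windows. A simplified sketch of that idea, without the question-type handling and coreference resolution the paper adds; the example passage and answers are made up.

```python
# Simplified sliding-window lexical matcher for multiple-choice machine
# comprehension, in the spirit of the baseline described above (question-type
# rules and coreference resolution are omitted).
def window_score(passage, question, answer, sizes=(3, 5, 10)):
    p = passage.lower().split()
    target = set((question + " " + answer).lower().split())
    best = 0.0
    for w in sizes:                                   # several window widths
        for i in range(max(1, len(p) - w + 1)):
            overlap = len(target & set(p[i:i + w]))
            best = max(best, overlap / len(target))
    return best

passage = "James went to the store and bought a red ball for his sister"
question = "What did James buy"
for answer in ["a red ball", "a blue kite"]:
    print(answer, window_score(passage, question, answer))
```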
{
"docid": "19ebc55c61424906312acc5a706ab209",
"text": "In this paper, we propose a walk-based graph kernel that generalizes the notion of treekernels to continuous spaces. Our proposed approach subsumes a general framework for word-similarity, and in particular, provides a flexible way to incorporate distributed representations. Using vector representations, such an approach captures both distributional semantic similarities among words as well as the structural relations between them (encoded as the structure of the parse tree). We show an efficient formulation to compute this kernel using simple matrix operations. We present our results on three diverse NLP tasks, showing state-of-the-art results.",
"title": ""
},
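The paper above computes its walk-based kernel with simple matrix operations over vector-labelled graphs. A hedged sketch of the general construction: a p-step walk kernel on the direct-product graph, with node similarity taken from (randomly generated stand-in) word embeddings. This illustrates the family of kernels, not the paper's exact formulation.

```python
# Sketch of a p-step walk kernel between two node-labelled graphs: weight the
# direct-product graph by node-vector similarity and sum over walks with
# matrix powers. Embeddings here are random stand-ins for word vectors.
import numpy as np

def walk_kernel(A1, X1, A2, X2, p=3, lam=0.5):
    S = X1 @ X2.T                                  # node-vector similarities
    s = S.flatten()                                # product-graph node weights
    W = s[:, None] * np.kron(A1, A2) * s[None, :]  # weighted product graph
    total, Wt = 0.0, np.eye(len(s))
    for t in range(1, p + 1):                      # walks of length 1..p
        Wt = Wt @ W
        total += (lam ** t) * Wt.sum()
    return total

rng = np.random.default_rng(0)
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # small path graph
A2 = np.array([[0, 1], [1, 0]], float)
X1, X2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 4))  # "word vectors"
print(walk_kernel(A1, X1, A2, X2))
```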
{
"docid": "00a0ab98af151a80fe7b51d6277cb996",
"text": "Meaning Representation for Sembanking",
"title": ""
},
{
"docid": "cfb08af0088de56519960beb9ee56607",
"text": "Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this “one task, one model” approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature.",
"title": ""
}
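The Distributional Memory framework above stores weighted word-link-word tuples as a third-order tensor and derives task-specific matrices from it. A toy sketch of one such view: matricise the tensor into a word × (link, word) matrix and compare words by cosine similarity. The tuples and weights below are invented for illustration; the real framework extracts them from a parsed corpus.

```python
# Toy Distributional Memory: word-link-word tuples -> word x (link, word)
# matrix -> cosine similarity between words. Tuples/weights are invented.
import numpy as np

tuples = {("dog", "subj_of", "bark"): 5.0, ("dog", "obj_of", "feed"): 3.0,
          ("cat", "subj_of", "meow"): 4.0, ("cat", "obj_of", "feed"): 2.5,
          ("car", "obj_of", "drive"): 6.0}

words = sorted({w for w, _, _ in tuples})
cols = sorted({(l, c) for _, l, c in tuples})
M = np.zeros((len(words), len(cols)))
for (w, l, c), weight in tuples.items():            # matricise the tensor
    M[words.index(w), cols.index((l, c))] = weight

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

i, j, k = words.index("dog"), words.index("cat"), words.index("car")
print("dog~cat:", cos(M[i], M[j]), "dog~car:", cos(M[i], M[k]))
```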
] |
[
{
"docid": "fd2450f5b02a2599be29b90a599ad31d",
"text": "Male genital injuries, demand prompt management to prevent long-term sexual and psychological damage. Injuries to the scrotum and contents may produce impaired fertility.We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.",
"title": ""
},
{
"docid": "1af7a41e5cac72ed9245b435c463b366",
"text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.",
"title": ""
},
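The method above models a document as a term graph and partitions it into communities, keeping dense groups as key terms. A rough sketch with networkx: sentence co-occurrence stands in for the Wikipedia-based semantic relatedness used in the paper, and modularity communities stand in for its specific detection algorithm.

```python
# Rough sketch of graph-based key term grouping: build a term graph, detect
# communities, keep the densest groups. Co-occurrence within sentences is a
# stand-in for Wikipedia-based semantic relatedness.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

sentences = [
    ["neural", "network", "training", "gradient"],
    ["neural", "network", "layers", "gradient"],
    ["weather", "forecast"],
]
G = nx.Graph()
for sent in sentences:
    for a, b in itertools.combinations(set(sent), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

communities = greedy_modularity_communities(G, weight="weight")
for com in communities:
    sub = G.subgraph(com)
    print(sorted(com), "density:", round(nx.density(sub), 2))
# A simple criterion would keep only the densest / most heavily connected
# communities as key-term groups and discard the rest.
```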
{
"docid": "c6cd755fa1d993eeb8931e56aec1d68e",
"text": "In this paper, we make two contributions to unsupervised domain adaptation (UDA) using the convolutional neural network (CNN). First, our approach transfers knowledge in all the convolutional layers through attention alignment. Most previous methods align high-level representations, e.g., activations of the fully connected (FC) layers. In these methods, however, the convolutional layers which underpin critical lowlevel domain knowledge cannot be updated directly towards reducing domain discrepancy. Specifically, we assume that the discriminative regions in an image are relatively invariant to image style changes. Based on this assumption, we propose an attention alignment scheme on all the target convolutional layers to uncover the knowledge shared by the source domain. Second, we estimate the posterior label distribution of the unlabeled data for target network training. Previous methods, which iteratively update the pseudo labels by the target network and refine the target network by the updated pseudo labels, are vulnerable to label estimation errors. Instead, our approach uses category distribution to calculate the cross-entropy loss for training, thereby ameliorating the error accumulation of the estimated labels. The two contributions allow our approach to outperform the state-of-the-art methods by +2.6% on the Office-31 dataset.",
"title": ""
},
{
"docid": "ffbe5d7219abcb5f7cef4be54302e3a0",
"text": "Modern medical care is influenced by two paradigms: 'evidence-based medicine' and 'patient-centered medicine'. In the last decade, both paradigms rapidly gained in popularity and are now both supposed to affect the process of clinical decision making during the daily practice of physicians. However, careful analysis shows that they focus on different aspects of medical care and have, in fact, little in common. Evidence-based medicine is a rather young concept that entered the scientific literature in the early 1990s. It has basically a positivistic, biomedical perspective. Its focus is on offering clinicians the best available evidence about the most adequate treatment for their patients, considering medicine merely as a cognitive-rational enterprise. In this approach the uniqueness of patients, their individual needs and preferences, and their emotional status are easily neglected as relevant factors in decision-making. Patient-centered medicine, although not a new phenomenon, has recently attracted renewed attention. It has basically a humanistic, biopsychosocial perspective, combining ethical values on 'the ideal physician', with psychotherapeutic theories on facilitating patients' disclosure of real worries, and negotiation theories on decision making. It puts a strong focus on patient participation in clinical decision making by taking into account the patients' perspective, and tuning medical care to the patients' needs and preferences. However, in this approach the ideological base is better developed than its evidence base. In modern medicine both paradigms are highly relevant, but yet seem to belong to different worlds. The challenge for the near future is to bring these separate worlds together. The aim of this paper is to give an impulse to this integration. Developments within both paradigms can benefit from interchanging ideas and principles from which eventually medical care will benefit. In this process a key role is foreseen for communication and communication research.",
"title": ""
},
{
"docid": "4bc85c4035c8bd4d502b13613147272c",
"text": "We present the first real-time method for refinement of depth data using shape-from-shading in general uncontrolled scenes. Per frame, our real-time algorithm takes raw noisy depth data and an aligned RGB image as input, and approximates the time-varying incident lighting, which is then used for geometry refinement. This leads to dramatically enhanced depth maps at 30Hz. Our algorithm makes few scene assumptions, handling arbitrary scene objects even under motion. To enable this type of real-time depth map enhancement, we contribute a new highly parallel algorithm that reformulates the inverse rendering optimization problem in prior work, allowing us to estimate lighting and shape in a temporally coherent way at video frame-rates. Our optimization problem is minimized using a new regular grid Gauss-Newton solver implemented fully on the GPU. We demonstrate results showing enhanced depth maps, which are comparable to offline methods but are computed orders of magnitude faster, as well as baseline comparisons with online filtering-based methods. We conclude with applications of our higher quality depth maps for improved real-time surface reconstruction and performance capture.",
"title": ""
},
{
"docid": "144bb8e869671843cb5d8053e2ee861d",
"text": "We investigate whether physicians' financial incentives influence health care supply, technology diffusion, and resulting patient outcomes. In 1997, Medicare consolidated the geographic regions across which it adjusts physician payments, generating area-specific price shocks. Areas with higher payment shocks experience significant increases in health care supply. On average, a 2 percent increase in payment rates leads to a 3 percent increase in care provision. Elective procedures such as cataract surgery respond much more strongly than less discretionary services. Non-radiologists expand their provision of MRIs, suggesting effects on technology adoption. We estimate economically small health impacts, albeit with limited precision.",
"title": ""
},
{
"docid": "b1cad8dde7d9ceb1bb973fb323652d05",
"text": "Sites for online classified ads selling sex are widely used by human traffickers to support their pernicious business. The sheer quantity of ads makes manual exploration and analysis unscalable. In addition, discerning whether an ad is advertising a trafficked victim or an independent sex worker is a very difficult task. Very little concrete ground truth (i.e., ads definitively known to be posted by a trafficker) exists in this space. In this work, we develop tools and techniques that can be used separately and in conjunction to group sex ads by their true owner (and not the claimed author in the ad). Specifically, we develop a machine learning classifier that uses stylometry to distinguish between ads posted by the same vs. different authors with 90% TPR and 1% FPR. We also design a linking technique that takes advantage of leakages from the Bitcoin mempool, blockchain and sex ad site, to link a subset of sex ads to Bitcoin public wallets and transactions. Finally, we demonstrate via a 4-week proof of concept using Backpage as the sex ad site, how an analyst can use these automated approaches to potentially find human traffickers.",
"title": ""
},
{
"docid": "7e7f88c872d1dd49c49830b667af960f",
"text": "The influence of Artificial Intelligence (AI) and Artificial Life (ALife) technologies upon society, and their potential to fundamentally shape the future evolution of humankind, are topics very much at the forefront of current scientific, governmental and public debate. While these might seem like very modern concerns, they have a long history that is often disregarded in contemporary discourse. Insofar as current debates do acknowledge the history of these ideas, they rarely look back further than the origin of the modern digital computer age in the 1940s–50s. In this paper we explore the earlier history of these concepts. We focus in particular on the idea of self-reproducing and evolving machines, and potential implications for our own species. We show that discussion of these topics arose in the 1860s, within a decade of the publication of Darwin’s The Origin of Species, and attracted increasing interest from scientists, novelists and the general public in the early 1900s. After introducing the relevant work from this period, we categorise the various visions presented by these authors of the future implications of evolving machines for humanity. We suggest that current debates on the co-evolution of society and technology can be enriched by a proper appreciation of the long history of the ideas involved.",
"title": ""
},
{
"docid": "3861e3655de5593526184df4b17f1493",
"text": "A new approach to Image Quality Assessment (IQA) is presented. The idea is based on the fact that two images are similar if their structural relationship within their blocks is preserved. To this end, a transition matrix is defined which exploits structural transitions between corresponding blocks of two images. The matrix contains valuable information about differences of two images, which should be transformed to a quality index. Eigen-value analysis over the transition matrix leads to a new distance measure called Eigen-gap. According to simulation results, the Eigen-gap is not only highly correlated to subjective scores but also, its performance is as good as the SSIM, a trustworthy index.",
"title": ""
},
{
"docid": "2da0db20b51b06036fa2fda8342202e3",
"text": "Recent advances in research tools for the systematic analysis of textual data are enabling exciting new research throughout the social sciences. For comparative politics, scholars who are often interested in nonEnglish and possibly multilingual textual datasets, these advances may be difficult to access. This article discusses practical issues that arise in the processing, management, translation, and analysis of textual data with a particular focus on how procedures differ across languages. These procedures are combined in two applied examples of automated text analysis using the recently introduced Structural Topic Model. We also show how the model can be used to analyze data that have been translated into a single language via machine translation tools. All the methods we describe here are implemented in open-source software packages available from the authors.",
"title": ""
},
{
"docid": "71757d1cee002bb235a591cf0d5aafd5",
"text": "There is an old Wall Street adage goes, ‘‘It takes volume to make price move”. The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China’s stock market. Pacific-Basin Finace Journal, 12, 541–564], Hodgson et al. [Hodgson, A., Masih, A. M. M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68–85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285–295] have found the correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which take stock index and trading volume as forecasting factors to predict stock index. In empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiplefactor models, Chen’s [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263–275] and Huarng and Yu’s [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445–462], as comparison models. The experimental results indicate that the proposed model outperforms the listing models and the employed factors, stock index and the volume technical indicator, VR(t), are effective in stock index forecasting. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "002fe3efae0fc9f88690369496ce5e7d",
"text": "Experimental evidence suggests that emotions can both speed-up and slow-down the internal clock. Speeding up has been observed for to-be-timed emotional stimuli that have the capacity to sustain attention, whereas slowing down has been observed for to-be-timed neutral stimuli that are presented in the context of emotional distractors. These effects have been explained by mechanisms that involve changes in bodily arousal, attention, or sentience. A review of these mechanisms suggests both merits and difficulties in the explanation of the emotion-timing link. Therefore, a hybrid mechanism involving stimulus-specific sentient representations is proposed as a candidate for mediating emotional influences on time. According to this proposal, emotional events enhance sentient representations, which in turn support temporal estimates. Emotional stimuli with a larger share in ones sentience are then perceived as longer than neutral stimuli with a smaller share.",
"title": ""
},
{
"docid": "0bd30308a11711f1dc71b8ff8ae8e80c",
"text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.",
"title": ""
},
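The auditing scheme above builds on a Merkle Hash Tree over data blocks so that a third-party auditor can check block integrity and support dynamic updates. A minimal sketch of the underlying primitive with hashlib: build the root, produce an authentication path for one block, and verify it. The signatures, bilinear aggregation, and dynamic block operations of the actual protocol are not shown.

```python
# Minimal Merkle Hash Tree over data blocks: root construction, an
# authentication path for one block, and verification against the root --
# the basic primitive the auditing protocol above relies on.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(b) for b in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    level, path = [h(b) for b in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                    # sibling within the pair
        path.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(block, path, root):
    node = h(block)
    for sibling, sib_is_right in path:
        node = h(node + sibling) if sib_is_right else h(sibling + node)
    return node == root

blocks = [b"block-0", b"block-1", b"block-2", b"block-3", b"block-4"]
root = merkle_root(blocks)
print(verify(blocks[2], auth_path(blocks, 2), root))    # True
print(verify(b"tampered", auth_path(blocks, 2), root))  # False
```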
{
"docid": "47a484d75b1635139f899d2e1875d8f4",
"text": "This work presents the concept and methodology as well as the architecture and physical implementation of an integrated node for smart-city applications. The presented integrated node lies on active RFID technology whereas the use case illustrated, with results from a small-scale verification of the presented node, refers to common-type waste-bins. The sensing units deployed for the use case are ultrasonic sensors that provide ranging information which is translated to fill-level estimations; however the use of a versatile active RFID tag within the node is able to afford multiple sensors for a variety of smart-city applications. The most important benefits of the presented node are power minimization, utilization of low-cost components and accurate fill-level estimation with a tiny data-load fingerprint, regarding the specific use case on waste-bins, whereas the node has to be deployed on public means of transportation or similar standard route vehicles within an urban or suburban context.",
"title": ""
},
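The node above turns ultrasonic ranging measurements into waste-bin fill-level estimates. A trivial sketch of that conversion (echo time to distance to fill percentage); the bin depth and speed of sound below are assumed constants, not values from the paper.

```python
# Echo time -> distance -> fill level, the basic conversion behind the
# waste-bin use case above. Bin depth and the speed of sound are assumptions.
SPEED_OF_SOUND = 343.0      # m/s at ~20 degrees C
BIN_DEPTH = 1.10            # m, from sensor face to bin bottom (assumed)

def fill_level(echo_time_s: float) -> float:
    distance = SPEED_OF_SOUND * echo_time_s / 2.0      # round trip -> one way
    distance = min(max(distance, 0.0), BIN_DEPTH)
    return 100.0 * (1.0 - distance / BIN_DEPTH)        # percent full

for t in (6.4e-3, 3.2e-3, 0.6e-3):                     # example echo times
    print(f"{fill_level(t):5.1f} % full")
```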
{
"docid": "d5bc5837349333a6f1b0b47f16844c13",
"text": "Personalized news recommender systems have gained increasing attention in recent years. Within a news reading community, the implicit correlations among news readers, news articles, topics and named entities, e.g., what types of named entities in articles are preferred by users, and why users like the articles, could be valuable for building an effective news recommender. In this paper, we propose a novel news personalization framework by mining such correlations. We use hypergraph to model various high-order relations among different objects in news data, and formulate news recommendation as a ranking problem on fine-grained hypergraphs. In addition, by transductive inference, our proposed algorithm is capable of effectively handling the so-called cold-start problem. Extensive experiments on a data set collected from various news websites have demonstrated the effectiveness of our proposed algorithm.",
"title": ""
},
{
"docid": "de298bb631dd0ca515c161b6e6426a85",
"text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. Other techniques require multiple images and are not tolerant to noise.",
"title": ""
},
{
"docid": "9b085f5cd0a080560d7ae17b7d4d6878",
"text": "The commercial roll-type corona-electrostatic separators, which are currently employed for the recovery of metals and plastics from mm-size granular mixtures, are inappropriate for the processing of finely-grinded wastes. The aim of the present work is to demonstrate that a belt-type corona-electrostatic separator could be an appropriate solution for the selective sorting of conductive and non-conductive products contained in micronized wastes. The experiments are carried out on a laboratory-scale multi-functional electrostatic separator designed by the authors. The corona discharge is generated between a wire-type dual electrode and the surface of the metal belt conveyor. The distance between the wire and the belt and the applied voltage are adjusted to values that permit particles charging without having an electric wind that puts them into motion on the surface of the belt. The separation is performed in the electric field generated between a high-voltage roll-type electrode (diameter 30 mm) and the grounded belt electrode. The study is conducted according to experimental design methodology, to enable the evaluation of the effects of the various factors that affect the efficiency of the separation: position of the roll-type electrode and applied high-voltage. The conclusions of this study will serve at the optimum design of an industrial belt-type corona-electrostatic separator for the recycling of metals and plastics from waste electric and electronic equipment.",
"title": ""
},
{
"docid": "2fed3f693a52ca9852c9238d3c9abf36",
"text": "A thin artificial magnetic conductor (AMC) structure is designed and breadboarded for radar cross-section (RCS) Reduction applications. The design presented in this paper shows the advantage of geometrical simplicity while simultaneously reducing the overall thickness (for the current design ). The design is very pragmatic and is based on a combination of AMC and perfect electric conductor (PEC) cells in a chessboard like configuration. An array of Sievenpiper's mushrooms constitutes the AMC part, while the PEC part is formed by full metallic patches. Around the operational frequency of the AMC-elements, the reflection of the AMC and PEC have opposite phase, so for any normal incident plane wave the reflections cancel out, thus reducing the RCS. The same applies to specular reflections for off-normal incidence angles. A simple basic model has been implemented in order to verify the behavior of this structure, while Ansoft-HFSS software has been used to provide a more thorough analysis. Both bistatic and monostatic measurements have been performed to validate the approach.",
"title": ""
},
{
"docid": "7182c5b1fac4a4d0d43a15c1feb28be1",
"text": "This paper provides an objective evaluation of the performance impacts of binary XML encodings, using a fast stream-based XQuery processor as our representative application. Instead of proposing one binary format and comparing it against standard XML parsers, we investigate the individual effects of several binary encoding techniques that are shared by many proposals. Our goal is to provide a deeper understanding of the performance impacts of binary XML encodings in order to clarify the ongoing and often contentious debate over their merits, particularly in the domain of high performance XML stream processing.",
"title": ""
},
{
"docid": "c5d9b3cf2332e06c883dc2f41e0f2ae8",
"text": "We assess the reliability of isobaric-tags for relative and absolute quantitation (iTRAQ), based on different types of replicate analyses taking into account technical, experimental, and biological variations. In total, 10 iTRAQ experiments were analyzed across three domains of life involving Saccharomyces cerevisiae KAY446, Sulfolobus solfataricus P2, and Synechocystis sp. PCC 6803. The coverage of protein expression of iTRAQ analysis increases as the variation tolerance increases. In brief, a cutoff point at +/-50% variation (+/-0.50) would yield 88% coverage in quantification based on an analysis of biological replicates. Technical replicate analysis produces a higher coverage level of 95% at a lower cutoff point of +/-30% variation. Experimental or iTRAQ variations exhibit similar behavior as biological variations, which suggest that most of the measurable deviations come from biological variations. These findings underline the importance of replicate analysis as a validation tool and benchmarking technique in protein expression analysis.",
"title": ""
}
] |
scidocsrr
|
820850a62df499fa78e9d82223dbad9c
|
Fragmentation or cohesion? Visualizing the process and consequences of information system diversity, 1993-2012
|
[
{
"docid": "04a00711497e5a627288f5dd5dda7c7c",
"text": "eywords: echnology acceptance model The goal of this paper is to present a visual mapping of intellectual structure in two-dimensions and to identify the subfields of the technology acceptance model through co-citation analysis. All the citation documents are included in the ISI Web of Knowledge database between 1989 and 2006. By using a sequence of statistical analyses including factor analysis, multidimensional scaling, and cluster analysis, we identified three main trends: task-related systems, e-commerce systems, and hedonic systems. The ial im o-citation -Commerce edonic findings yielded manager",
"title": ""
},
{
"docid": "6c175d7a90ed74ab3b115977c82b0ffa",
"text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.",
"title": ""
}
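The analysis above characterizes semantic networks by sparse connectivity, short average path lengths, strong local clustering, and power-law degree distributions. A small networkx sketch that computes those statistics for an arbitrary graph; a random Barabási-Albert graph stands in for a word-association network here.

```python
# Compute the small-world / scale-free statistics discussed above for a
# graph. A Barabasi-Albert random graph stands in for a semantic network.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)   # scale-free toy graph

n_nodes, n_edges = G.number_of_nodes(), G.number_of_edges()
density = 2 * n_edges / (n_nodes * (n_nodes - 1))   # sparse connectivity
clustering = nx.average_clustering(G)               # local clustering
path_len = nx.average_shortest_path_length(G)       # short paths (G is connected)
degrees = np.array([d for _, d in G.degree()])

print(f"density {density:.4f}, clustering {clustering:.3f}, "
      f"avg path length {path_len:.2f}")
# Rough power-law check: slope of the degree histogram on log-log axes.
hist, edges = np.histogram(degrees, bins=np.logspace(0.5, np.log10(degrees.max()), 15))
mask = hist > 0
slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
print("approximate degree-distribution slope:", round(slope, 2))
```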
] |
[
{
"docid": "3adb2815bceb4a3bf11e5d3a595ac098",
"text": "Orientation estimation using low cost sensors is an important task for Micro Aerial Vehicles (MAVs) in order to obtain a good feedback for the attitude controller. The challenges come from the low accuracy and noisy data of the MicroElectroMechanical System (MEMS) technology, which is the basis of modern, miniaturized inertial sensors. In this article, we describe a novel approach to obtain an estimation of the orientation in quaternion form from the observations of gravity and magnetic field. Our approach provides a quaternion estimation as the algebraic solution of a system from inertial/magnetic observations. We separate the problems of finding the \"tilt\" quaternion and the heading quaternion in two sub-parts of our system. This procedure is the key for avoiding the impact of the magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We propose a novel complementary filter for MAVs that fuses together gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment of a micro quadrotor helicopter.",
"title": ""
},
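The filter above first derives a "tilt" quaternion from the gravity observation and only afterwards a heading quaternion from the magnetometer, which keeps magnetic disturbances out of roll and pitch. A numpy sketch of the tilt step: the shortest rotation that takes the measured (normalized) accelerometer vector onto the world vertical. This illustrates the general idea, not the paper's exact algebraic solution, and the heading half is omitted.

```python
# Tilt quaternion from an accelerometer reading: the shortest rotation taking
# the measured gravity direction onto the world z-axis (the "tilt" half of
# the decomposition described above; the magnetometer/heading half is omitted).
import numpy as np

def tilt_quaternion(accel):
    a = np.asarray(accel, float)
    a = a / np.linalg.norm(a)                   # measured gravity direction
    z = np.array([0.0, 0.0, 1.0])               # world vertical
    v = np.cross(a, z)                          # rotation axis (unnormalised)
    w = 1.0 + np.dot(a, z)                      # = 2*cos^2(theta/2)
    q = np.array([w, *v])
    return q / np.linalg.norm(q)                # unit quaternion [w, x, y, z]

def rotate(q, vec):                             # rotate vec by quaternion q
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ vec

accel = [0.0, 4.905, 8.496]                     # sensor pitched ~30 degrees
q = tilt_quaternion(accel)
print(np.round(rotate(q, accel / np.linalg.norm(accel)), 3))  # ~[0, 0, 1]
```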
{
"docid": "6b94f2f88fb62de5bec8ae0ace3afa1c",
"text": "The purpose of this paper is to design a microstrip patch antenna with low pass filter for efficient rectenna design this structure having the property of rejecting higher harmonics than 2GHz. As the design frequency is 2GHz.in first step we design a patch antenna in second step we design patch antenna with low pass filter and combine these two. The IE3D software is used for the simulation of this structure.",
"title": ""
},
{
"docid": "06f99b18bae3f15e77db8ff2d8c159cc",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "39b02ea486f16b0e09c79b7f4d792531",
"text": "In this paper, we present the Functional Catalogue (FunCat), a hierarchically structured, organism-independent, flexible and scalable controlled classification system enabling the functional description of proteins from any organism. FunCat has been applied for the manual annotation of prokaryotes, fungi, plants and animals. We describe how FunCat is implemented as a highly efficient and robust tool for the manual and automatic annotation of genomic sequences. Owing to its hierarchical architecture, FunCat has also proved to be useful for many subsequent downstream bioinformatic applications. This is illustrated by the analysis of large-scale experiments from various investigations in transcriptomics and proteomics, where FunCat was used to project experimental data into functional units, as 'gold standard' for functional classification methods, and also served to compare the significance of different experimental methods. Over the last decade, the FunCat has been established as a robust and stable annotation scheme that offers both, meaningful and manageable functional classification as well as ease of perception.",
"title": ""
},
{
"docid": "f7ff2a89ed5aed67bbb2dc41defa30a8",
"text": "People with color-grapheme synesthesia experience color when viewing written letters or numerals, usually with a particular color evoked by each grapheme. Here, we report on data from 11 color-grapheme synesthetes who had startlingly similar color-grapheme pairings traceable to childhood toys containing colored letters. These are the first and only data to show learned synesthesia of this kind in more than a single individual. Whereas some researchers have focused on genetic and perceptual aspects of synesthesia, our results indicate that a complete explanation of synesthesia must also incorporate a central role for learning and memory. We argue that these two positions can be reconciled by thinking of synesthesia as the automatic retrieval of highly specific mnemonic associations, in which perceptual contents are brought to mind in a manner akin to mental imagery or the perceptual-reinstatement effects found in memory studies.",
"title": ""
},
{
"docid": "67995490350c68f286029d8b401d78d8",
"text": "OBJECTIVE\nModifiable risk factors for dementia were recently identified and compiled in a systematic review. The 'Lifestyle for Brain Health' (LIBRA) score, reflecting someone's potential for dementia prevention, was studied in a large longitudinal population-based sample with respect to predicting cognitive change over an observation period of up to 16 years.\n\n\nMETHODS\nLifestyle for Brain Health was calculated at baseline for 949 participants aged 50-81 years from the Maastricht Ageing Study. The predictive value of LIBRA for incident dementia and cognitive impairment was examined by using Cox proportional hazard models and by testing its relation with cognitive decline.\n\n\nRESULTS\nLifestyle for Brain Health predicted future risk of dementia, as well as risk of cognitive impairment. A one-point increase in LIBRA score related to 19% higher risk for dementia and 9% higher risk for cognitive impairment. LIBRA predicted rate of decline in processing speed, but not memory or executive functioning.\n\n\nCONCLUSIONS\nLifestyle for Brain Health (LIBRA) may help in identifying and monitoring risk status in dementia-prevention programmes, by targeting modifiable, lifestyle-related risk factors. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "1a44645ee469e4bbaa978216d01f7e0d",
"text": "The growing popularity of mobile search and the advancement in voice recognition technologies have opened the door for web search users to speak their queries, rather than type them. While this kind of voice search is still in its infancy, it is gradually becoming more widespread. In this paper, we examine the logs of a commercial search engine's mobile interface, and compare the spoken queries to the typed-in queries. We place special emphasis on the semantic and syntactic characteristics of the two types of queries. %Our analysis suggests that voice queries focus more on audio-visual content and question answering, and less on social networking and adult domains. We also conduct an empirical evaluation showing that the language of voice queries is closer to natural language than typed queries. Our analysis reveals further differences between voice and text search, which have implications for the design of future voice-enabled search tools.",
"title": ""
},
{
"docid": "fb8e6eac761229fc8c12339fb68002ed",
"text": "Cerebrovascular disease results from any pathological process of the blood vessels supplying the brain. Stroke, characterised by its abrupt onset, is the third leading cause of death in humans. This rare condition in dogs is increasingly being recognised with the advent of advanced diagnostic imaging. Magnetic resonance imaging (MRI) is the first choice diagnostic tool for stroke, particularly using diffusion-weighted images and magnetic resonance angiography for ischaemic stroke and gradient echo sequences for haemorrhagic stroke. An underlying cause is not always identified in either humans or dogs. Underlying conditions that may be associated with canine stroke include hypothyroidism, neoplasia, sepsis, hypertension, parasites, vascular malformation and coagulopathy. Treatment is mainly supportive and recovery often occurs within a few weeks. The prognosis is usually good if no underlying disease is found.",
"title": ""
},
{
"docid": "046ae00fa67181dff54e170e48a9bacf",
"text": "For the evaluation of grasp quality, different measures have been proposed that are based on wrench spaces. Almost all of them have drawbacks that derive from the non-uniformity of the wrench space, composed of force and torque dimensions. Moreover, many of these approaches are computationally expensive. We address the problem of choosing a proper task wrench space to overcome the problems of the non-uniform wrench space and show how to integrate it in a well-known, high precision and extremely fast computable grasp quality measure.",
"title": ""
},
{
"docid": "34ea56262e83b63a6e08591ae86b03ef",
"text": "This article focuses on the variants and imaging pitfalls in the ankle and foot.",
"title": ""
},
{
"docid": "eb9aa36113813166248e3b8d5f2cb426",
"text": "In this paper, the nonlinear behavior of coplanar waveguide (CPW) transmission lines fabricated on Si and high-resistivity (HR) Si substrates is thoroughly investigated. Simulations and experimental characterization of 50- Ω CPW lines are analyzed under small- and large-signal operation at 900 MHz for a wide variety of Si substrates with nominal resistivities from 10 Ω-cm up to values higher than 10 kΩ-cm. The introduction of a trap-rich layer to recover the Si substrate nominal HR characteristics is also considered. We experimentally demonstrate that the distortion level of a CPW line lying on Si substrate decreases with the effective resistivity sensed by the coplanar structure. Si substrates of effective resistivity higher than 3 kΩ-cm present harmonic levels below -80 dBm for an output power of +15 dBm.",
"title": ""
},
{
"docid": "1a0b377e18e696088f7ad80bd23ef59d",
"text": "The ability to introspect into the behavior of software at runtime is crucial for many security-related tasks, such as virtual machine-based intrusion detection and low-artifact malware analysis. Although some progress has been made in this task by automatically creating programs that can passively retrieve kernel-level information, two key challenges remain. First, it is currently difficult to extract useful information from user-level applications, such as web browsers. Second, discovering points within the OS and applications to hook for active monitoring is still an entirely manual process. In this paper we propose a set of techniques to mine the memory accesses made by an operating system and its applications to locate useful places to deploy active monitoring, which we call tap points. We demonstrate the efficacy of our techniques by finding tap points for useful introspection tasks such as finding SSL keys and monitoring web browser activity on five different operating systems (Windows 7, Linux, FreeBSD, Minix and Haiku) and two processor architectures (ARM and x86).",
"title": ""
},
{
"docid": "51b91ef1b46d6696a0e99eb8649d6447",
"text": "A solid-state drive (SSD) gains fast I/O speed and is becoming an ideal replacement for traditional rotating storage. However, its speed and responsiveness heavily depend on internal fragmentation. With a high degree of fragmentation, an SSD may experience sharp performance degradation. Hence, minimizing fragmentation in the SSD is an effective way to sustain its high performance. In this paper, we propose an innovative file data placement strategy for Rocks DB, a widely used embedded NoSQL database. The proposed strategy steers data to a write unit exposed by an SSD according to predicted data lifetime. By placing data with similar lifetime in the same write unit, fragmentation in the SSD is controlled at the time of data write. We evaluate our proposed strategy using the Yahoo! Cloud Serving Benchmark. Our experimental results demonstrate that the proposed strategy improves the Rocks DB performance significantly: the throughput can be increased by up to 41%, 99.99%ile latency reduced by 59%, and SSD lifetime extended by up to 18%.",
"title": ""
},
{
"docid": "5ca6f2aaa70a7c7593e68f25999697d8",
"text": "Traditional text detection methods mostly focus on quadrangle text. In this study we propose a novel method named sliding line point regression (SLPR) in order to detect arbitrary-shape text in natural scene. SLPR regresses multiple points on the edge of text line and then utilizes these points to sketch the outlines of the text. The proposed SLPR can be adapted to many object detection architectures such as Faster R-CNN and R-FCN. Specifically, we first generate the smallest rectangular box including the text with region proposal network (RPN), then isometrically regress the points on the edge of text by using the vertically and horizontally sliding lines. To make full use of information and reduce redundancy, we calculate x-coordinate or y-coordinate of target point by the rectangular box position, and just regress the remaining y-coordinate or x-coordinate. Accordingly we can not only reduce the parameters of system, but also restrain the points which will generate more regular polygon. Our approach achieved competitive results on traditional ICDAR2015 Incidental Scene Text benchmark and curve text detection dataset CTW1500.",
"title": ""
},
{
"docid": "601896f7ccafbc97eb01e3b3f02cc6ec",
"text": "Changes to network structure can substantially affect when and how widely new ideas, products, and conventions are adopted. In models of biological contagion, interventions that randomly rewire edges (generally making them “longer”) accelerate spread. However, there are other models relevant to social contagion, such as those motivated by myopic best-response in games with strategic complements, in which an individual’s behavior is described by a threshold number of adopting neighbors above which adoption occurs (i.e., complex contagions). Recent work has argued that highly clustered, rather than random, networks facilitate spread of these complex contagions. This conclusion is based primarily on theoretical and simulation results assuming adoption never occurs with only one adopting neighbor. Here we show that minor modifications to this model, which make it more realistic, reverse this result. The modification is that we allow very rare below-threshold adoption, i.e., very rarely adoption occurs when there is only one adopting neighbor. To model the trade-off between long and short edges we consider networks that are the union of cycle-power-k graphs and random graphs on n nodes. We study how the time to global spread changes as we replace the cycle edges with (random) long ties. Allowing adoptions below threshold to occur with order 1/ √ n probability along some cycle edges is enough to ensure that random rewiring accelerates spread. Simulations illustrate the robustness of these results to other commonly-posited models for noisy best-response behavior. We then examine empirical social networks, where we find that hypothetical interventions that (a) randomly rewire existing edges, or (b) add random edges, reduce time to spread compared with the original network or addition of “short”, triad-closing edges, respectively. This substantially revises conclusions about how interventions change the spread of behavior, suggesting that those wanting to increase spread should induce formation of long ties, rather than triad-closing ties. More generally, this highlights the importance of noise in game-theoretic analyses of behavior.",
"title": ""
},
{
"docid": "74260280ebe49952537858ba82c3cbfc",
"text": "Pretarsal roll augmentation with dermal hyaluronic acid filler injection focuses on restoring pretarsal fullness. This study aimed to introduce a method of pretarsal roll augmentation with dermal hyaluronic acid filler injection and establish the level of difficulty, safety, and effectiveness of this method. Eighty female patients were enrolled in this study. Hyaluronic acid filler was used to perform pretarsal roll augmentation. Physician and patient satisfaction at 1 month and 4 months after surgery was investigated. The level of satisfaction was graded from points 1 to 5. The patient satisfaction and physician scores were 4.7 ± 1.1 (mean ± standard deviation) points at 1 month and 4.8 ± 0.9 points at 4 months and 4.6 ± 0.9 points at 1 month and 4.8 ± 1.0 points at 4 months, respectively. No major complications were observed. Our technique provided a natural and younger appearance with pretarsal fullness. This technique was easy to perform for the restoration of pretarsal fullness, and it improved periorbital contouring, rejuvenated the pretarsal roll, and provided excellent esthetic results. Level of Evidence: Level V, therapeutic study.",
"title": ""
},
{
"docid": "dc42ffc3d9a5833f285bac114e8a8b37",
"text": "In this paper, we present a recursive algorithm for extracting classification rules from feedforward neural networks (NNs) that have been trained on data sets having both discrete and continuous attributes. The novelty of this algorithm lies in the conditions of the extracted rules: the rule conditions involving discrete attributes are disjoint from those involving continuous attributes. The algorithm starts by first generating rules with discrete attributes only to explain the classification process of the NN. If the accuracy of a rule with only discrete attributes is not satisfactory, the algorithm refines this rule by recursively generating more rules with discrete attributes not already present in the rule condition, or by generating a hyperplane involving only the continuous attributes. We show that for three real-life credit scoring data sets, the algorithm generates rules that are not only more accurate but also more comprehensible than those generated by other NN rule extraction methods.",
"title": ""
},
{
"docid": "2871de581ee0efe242438567ca3a57dd",
"text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.",
"title": ""
},
{
"docid": "539dc7f8657f83ac2ae9590a283c7321",
"text": "This paper presents a review on Optical Character Recognition Techniques. Optical Character recognition (OCR) is a technology that allows machines to automatically recognize the characters through an optical mechanism. OCR can be described as Mechanical or electronic conversion of scanned images where images can be handwritten, typewritten or printed text. It converts the images into machine-encoded text that can be used in machine translation, text-to-speech and text mining. Various techniques are available for character recognition in optical character recognition system. This material can be useful for the researchers who wish to work in character recognition area.",
"title": ""
}
] |
scidocsrr
|
e0f002f256a89bb86e6891743aa4aa4c
|
A PID BASED ANFIS & FUZZY CONTROL OF INVERTED PENDULUM ON INCLINED PLANE ( IPIP )
|
[
{
"docid": "cc9ee1b5111974da999d8c52ba393856",
"text": "The back propagation (BP) neural network algorithm is a multi-layer feedforward network trained according to error back propagation algorithm and is one of the most widely applied neural network models. BP network can be used to learn and store a great deal of mapping relations of input-output model, and no need to disclose in advance the mathematical equation that describes these mapping relations. Its learning rule is to adopt the steepest descent method in which the back propagation is used to regulate the weight value and threshold value of the network to achieve the minimum error sum of square. This paper focuses on the analysis of the characteristics and mathematical theory of BP neural network and also points out the shortcomings of BP algorithm as well as several methods for improvement.",
"title": ""
}
] |
[
{
"docid": "750846bc27dc013bd0d392959caf3ecc",
"text": "Analysis of the WinZip en ryption method Tadayoshi Kohno May 8, 2004 Abstra t WinZip is a popular ompression utility for Mi rosoft Windows omputers, the latest version of whi h is advertised as having \\easy-to-use AES en ryption to prote t your sensitive data.\" We exhibit several atta ks against WinZip's new en ryption method, dubbed \\AE-2\" or \\Advan ed En ryption, version two.\" We then dis uss se ure alternatives. Sin e at a high level the underlying WinZip en ryption method appears se ure (the ore is exa tly En ryptthen-Authenti ate using AES-CTR and HMAC-SHA1), and sin e one of our atta ks was made possible be ause of the way that WinZip Computing, In . de ided to x a di erent se urity problem with its previous en ryption method AE-1, our atta ks further unders ore the subtlety of designing ryptographi ally se ure software.",
"title": ""
},
{
"docid": "704598402da135b6b7e3251de4c6edf8",
"text": "Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Our approach combines machine-learning and sampling heuristics in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them.",
"title": ""
},
{
"docid": "bbfcce9ec7294cb542195cca1dfbcc6c",
"text": "We propose a new algorithm, DASSO, for fitting the entire coef fici nt path of the Dantzig selector with a similar computational cost to the LA RS algorithm that is used to compute the Lasso. DASSO efficiently constructs a piecewi s linear path through a sequential simplex-like algorithm, which is remarkably si milar to LARS. Comparison of the two algorithms sheds new light on the question of how th e Lasso and Dantzig selector are related. In addition, we provide theoretical c onditions on the design matrix, X, under which the Lasso and Dantzig selector coefficient esti mates will be identical for certain tuning parameters. As a consequence, in many instances, we are able to extend the powerful non-asymptotic bounds that have been de veloped for the Dantzig selector to the Lasso. Finally, through empirical studies o f imulated and real world data sets we show that in practice, when the bounds hold for th e Dantzig selector, they almost always also hold for the Lasso. Some key words : Dantzig selector; LARS; Lasso; DASSO",
"title": ""
},
{
"docid": "cb815a01960490760e2ac581e26f4486",
"text": "To solve the weakly-singular Volterra integro-differential equations, the combined method of the Laplace Transform Method and the Adomian Decomposition Method is used. As a result, series solutions of the equations are constructed. In order to explore the rapid decay of the equations, the pade approximation is used. The results present validity and great potential of the method as a powerful algorithm in order to present series solutions for singular kind of differential equations.",
"title": ""
},
{
"docid": "77e2aac8b42b0b9263278280d867cb40",
"text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.",
"title": ""
},
{
"docid": "40ba65504518383b4ca2a6fabff261fe",
"text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). ‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial",
"title": ""
},
{
"docid": "ddfd02c12c42edb2607a6f193f4c242b",
"text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et.al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.",
"title": ""
},
{
"docid": "fc509e8f8c0076ad80df5ff6ee6b6f1e",
"text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences on both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven with the effectiveness in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile added-value services.",
"title": ""
},
{
"docid": "3da8cb73f3770a803ca43b8e2a694ccc",
"text": "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.",
"title": ""
},
{
"docid": "5536e605e0b8a25ee0a5381025484f60",
"text": "Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation. Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure’s running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to sychronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks.",
"title": ""
},
{
"docid": "c168fdc6e1e19280aea2eb011ec7a3b1",
"text": "OBJECTIVE\nThe study aimed to formulate an easy clinical approach that may be used by clinicians of all backgrounds to diagnose vulvar dermatological disorders.\n\n\nMATERIALS AND METHODS\nThe International Society for the Study of Vulvovaginal Disease appointed a committee with multinational members from the fields of dermatology, gynecology, and pathology and charged the committee to formulate a clinically based terminology and classification of vulvar dermatological disorders. The committee carried out its work by way of multiple rounds of e-mails extending over almost 2 year's time.\n\n\nRESULTS\nThe committee was able to formulate a consensus report containing terminology, classification, and a step-wise approach to clinical diagnosis of vulvar dermatological disorders. This report was presented and approved by the International Society for the Study of Vulvovaginal Disease at the XXI International Congress held in Paris, France, on September 3 to 8, 2011.\n\n\nCONCLUSIONS\nThe authors believe that the approach to terminology and classification as well as clinical diagnosis contained in this article allows clinicians to make highly accurate diagnoses of vulvar dermatological disorders within the clinical setting. This, in turn, will reduce the need for referrals and will improve the care for women with most vulvar disorders.",
"title": ""
},
{
"docid": "2eac0a94204b24132e496639d759f545",
"text": "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.",
"title": ""
},
{
"docid": "effd314d69f6775b80dbe5570e3f37d8",
"text": "New paradigms in networking industry, such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV), require the hypervisors to enable the execution of Virtual Network Functions in virtual machines (VMs). In this context, the virtual switch function is critical to achieve carrier grade performance, hardware independence, advanced features and programmability. SnabbSwitch is a virtual switch designed to run in user space with carrier grade performance targets, based on an efficient architecture which has driven the development of vhost-user (now also adopted by OVS-DPDK, the user space implementation of OVS based on Intel DPDK), easy to deploy and to program through its Lua scripting layer. This paper presents the SnabbSwitch virtual switch implementation along with its novelties (the vhost-user implementation and the usage of a trace compiler) and code optimizations, which have been merged in the mainline project repository. Extensive benchmarking activities, whose results are included in this paper, have been carried on to compare SnabbSwitch with other virtual switching solutions (i.e., OVS, OVS-DPDK, Linux Bridge, VFIO and SR-IOV). These results show that SnabbSwitch performs as well as hardware based solutions, such as SR-IOV and VFIO, while allowing for additional functional and flexible operation; they show also that SnabbSwitch is faster than the vhost-user based version (user space) of OVS-DPDK.",
"title": ""
},
{
"docid": "bdd1c64962bfb921762259cca4a23aff",
"text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.",
"title": ""
},
{
"docid": "62b64b2182bbcd92a6bd84aec8927166",
"text": "Parasympathetic regulation of heart rate through the vagus nerve--often measured as resting respiratory sinus arrhythmia or cardiac vagal tone (CVT)--is a key biological correlate of psychological well-being. However, recent theorizing has suggested that many biological and psychological processes can become maladaptive when they reach extreme levels. This raises the possibility that CVT might not have an unmitigated positive relationship with well-being. In line with this reasoning, across 231 adult participants (Mage = 40.02 years; 52% female), we found that CVT was quadratically related to multiple measures of well-being, including life satisfaction and depressive symptoms. Individuals with moderate CVT had higher well-being than those with low or high CVT. These results provide the first direct evidence of a nonlinear relationship between CVT and well-being, adding to a growing body of research that has suggested some biological processes may cease being adaptive when they reach extreme levels.",
"title": ""
},
{
"docid": "289694f2395a6a2afc7d86d475b9c02d",
"text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.",
"title": ""
},
{
"docid": "d00765c898151dd5977fab8e39c4d7e9",
"text": "Knowledge graphs (KG) play a crucial role in many modern applications. However, constructing a KG from natural language text is challenging due to the complex structure of the text. Recently, many approaches have been proposed to transform natural language text to triples to obtain KGs. Such approaches have not yet provided efficient results for mapping extracted elements of triples, especially the predicate, to their equivalent elements in a KG. Predicate mapping is essential because it can reduce the heterogeneity of the data and increase the searchability over a KG. In this article, we propose T2KG, an automatic KG creation framework for natural language text, to more effectively map natural language text to predicates. In our framework, a hybrid combination of a rule-based approach and a similarity-based approach is presented for mapping a predicate to its corresponding predicate in a KG. Based on experimental results, the hybrid approach can identify more similar predicate pairs than a baseline method in the predicate mapping task. An experiment on KG creation is also conducted to investigate the performance of the T2KG. The experimental results show that the T2KG also outperforms the baseline in KG creation. Although KG creation is conducted in open domains, in which prior knowledge is not provided, the T2KG still achieves an F1 score of approximately 50% when generating triples in the KG creation task. In addition, an empirical study on knowledge population using various text sources is conducted, and the results indicate the T2KG could be used to obtain knowledge that is not currently available from DBpedia. key words: knowledge graph, knowledge discovery, knowledge extraction, linked data",
"title": ""
},
{
"docid": "2b688f9ca05c2a79f896e3fee927cc0d",
"text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.",
"title": ""
},
{
"docid": "92d04ad5a9fa32c2ad91003213b1b86d",
"text": "You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you are hesitant to statistically analyze the data, as you may be unsure about which statistical tests to...",
"title": ""
},
{
"docid": "1f5a2259bd57f35a604fb8d23538c741",
"text": "Can peer-to-peer lending (P2P) crowdfunding disintermediate and mitigate information frictions in lending such that choices and outcomes for at least some borrowers and investors are improved? I offer a framing of issues and survey the nascent literature on P2P. On the investor side, P2P disintermediates an asset class of consumer loans, and investors seem to capture some rents associated with the removal of the cost of that financial intermediation. Risk and portfolio choice questions linger prior to any inference. On the borrower side, evidence suggests that proximate knowledge (direct or inferred) unearths soft information, and by implication, P2P should be able to offer pricing and/or access benefits to potential borrowers. However, social connections require costly certification (skin in the game) to inform credit risk. Early research suggests an ever-increasing scope for use of Big Data and incentivized re-intermediation of underwriting. I ask many more questions than current research can answer, hoping to motivate future research.",
"title": ""
}
] |
scidocsrr
|
50eb4a04917ef30ee2e85fdb4b5107e5
|
Deep Convolutional Framelet Denosing for Low-Dose CT via Wavelet Residual Network
|
[
{
"docid": "776e04fa00628e249900b02f1edf9432",
"text": "We propose an algorithm for minimizing the total variation of an image, and provide a proof of convergence. We show applications to image denoising, zooming, and the computation of the mean curvature motion of interfaces.",
"title": ""
}
] |
[
{
"docid": "8aa6e81b8dd30fb562e5a31356e61a03",
"text": "In this paper, we propose a honeypot architecture for detecting and analyzing unknown network attacks. The main focus of our approach lies in improving the “significance” of recorded events and network traffic that need to be analyzed by a human network security operator in order to identify a new attacking pattern. Our architecture aims to achieve this goal by combining three main components: 1. a packet filter that suppresses all known attacking packets, 2. a proxy host that performs session-individual logging of network traffic, and 3. a honeypot host that executes actual network services to be potentially attacked from the Internet in a carefully supervised environment and that reports back to the proxy host upon the detection of suspicious behavior. Experiences with our first prototype of this concept show that it is relatively easy to specify suspicious behavior and that traffic belonging to an attack can be successfully identified and marked.",
"title": ""
},
{
"docid": "16e90e4dbf5597ce6721a6177344db15",
"text": "BACKGROUND\nScoping reviews are used to identify knowledge gaps, set research agendas, and identify implications for decision-making. The conduct and reporting of scoping reviews is inconsistent in the literature. We conducted a scoping review to identify: papers that utilized and/or described scoping review methods; guidelines for reporting scoping reviews; and studies that assessed the quality of reporting of scoping reviews.\n\n\nMETHODS\nWe searched nine electronic databases for published and unpublished literature scoping review papers, scoping review methodology, and reporting guidance for scoping reviews. Two independent reviewers screened citations for inclusion. Data abstraction was performed by one reviewer and verified by a second reviewer. Quantitative (e.g. frequencies of methods) and qualitative (i.e. content analysis of the methods) syntheses were conducted.\n\n\nRESULTS\nAfter searching 1525 citations and 874 full-text papers, 516 articles were included, of which 494 were scoping reviews. The 494 scoping reviews were disseminated between 1999 and 2014, with 45% published after 2012. Most of the scoping reviews were conducted in North America (53%) or Europe (38%), and reported a public source of funding (64%). The number of studies included in the scoping reviews ranged from 1 to 2600 (mean of 118). Using the Joanna Briggs Institute methodology guidance for scoping reviews, only 13% of the scoping reviews reported the use of a protocol, 36% used two reviewers for selecting citations for inclusion, 29% used two reviewers for full-text screening, 30% used two reviewers for data charting, and 43% used a pre-defined charting form. In most cases, the results of the scoping review were used to identify evidence gaps (85%), provide recommendations for future research (84%), or identify strengths and limitations (69%). We did not identify any guidelines for reporting scoping reviews or studies that assessed the quality of scoping review reporting.\n\n\nCONCLUSION\nThe number of scoping reviews conducted per year has steadily increased since 2012. Scoping reviews are used to inform research agendas and identify implications for policy or practice. As such, improvements in reporting and conduct are imperative. Further research on scoping review methodology is warranted, and in particular, there is need for a guideline to standardize reporting.",
"title": ""
},
{
"docid": "ae00d200eeb64e385cf0d534239acf23",
"text": "We extend the diffuse interface model developed in Wise et al. (2008) to study nonlinear tumor growth in 3-D. Extensions include the tracking of multiple viable cell species populations through a continuum diffuse-interface method, onset and aging of discrete tumor vessels through angiogenesis, and incorporation of individual cell movement using a hybrid continuum-discrete approach. We investigate disease progression as a function of cellular-scale parameters such as proliferation and oxygen/nutrient uptake rates. We find that heterogeneity in the physiologically complex tumor microenvironment, caused by non-uniform distribution of oxygen, cell nutrients, and metabolites, as well as phenotypic changes affecting cellular-scale parameters, can be quantitatively linked to the tumor macro-scale as a mechanism that promotes morphological instability. This instability leads to invasion through tumor infiltration of surrounding healthy tissue. Models that employ a biologically founded, multiscale approach, as illustrated in this work, could help to quantitatively link the critical effect of heterogeneity in the tumor microenvironment with clinically observed tumor growth and invasion. Using patient tumor-specific parameter values, this may provide a predictive tool to characterize the complex in vivo tumor physiological characteristics and clinical response, and thus lead to improved treatment modalities and prognosis.",
"title": ""
},
{
"docid": "861cec5b7546b915037322585ee6abc0",
"text": "A optimization framework for three-dimensional conformal radiation therapy is presented. In conformal therapy, beams of radiation are applied to a patient from different directions, where the aperture through which the beam is delivered from each direction is chosen to match the shape of the tumor, as viewed from that direction. Wedge filters may be used to produce a gradient in beam intensity across the aperture. Given a set of equispaced beam angles, a mixed-integer linear program can be solved to determine the most effective angles to be used in a treatment plan, the weight (exposure time) to be used for each beam, and the type and orientation of wedges to be used. Practical solution techniques for this problem are described; they include strengthening of the formulation and solution of smaller approximate problems obtained by a reduced parametrization of the treatment region. In addition, techniques for controlling the dose-volume histogram implicitly for various parts of the treatment region using hotand cold-spot control parameters are presented. Computational results are given that show the effectiveness of the proposed approach on practical data sets.",
"title": ""
},
{
"docid": "c4caf2968f7f2509b199d8d0ce5eec2d",
"text": "for competition that is based on information, their ability to exploit intangible assets has become far more decisive than their ability to invest in and manage physical assets. Several years ago, in recognition of this change, we introduced a concept we called the balanced scorecard. The balanced scorecard supplemented traditional fi nancial measures with criteria that measured performance from three additional perspectives – those of customers, internal business processes, and learning and growth. (See the exhibit “Translating Vision and Strategy: Four Perspectives.”) It therefore enabled companies to track fi nancial results while simultaneously monitoring progress in building the capabilities and acquiring the intangible assets they would need for future growth. The scorecard wasn’t Editor’s Note: In 1992, Robert S. Kaplan and David P. Norton’s concept of the balanced scorecard revolutionized conventional thinking about performance metrics. By going beyond traditional measures of fi nancial performance, the concept has given a generation of managers a better understanding of how their companies are really doing. These nonfi nancial metrics are so valuable mainly because they predict future fi nancial performance rather than simply report what’s already happened. This article, fi rst published in 1996, describes how the balanced scorecard can help senior managers systematically link current actions with tomorrow’s goals, focusing on that place where, in the words of the authors, “the rubber meets the sky.” Using the Balanced Scorecard as a Strategic Management System",
"title": ""
},
{
"docid": "01beae2504022968153e73be91d1765d",
"text": "User studies in the music information retrieval and music digital library fields have been gradually increasing in recent years, but large-scale studies that can help detect common user behaviors are still lacking. We have conducted a large-scale user survey in which we asked numerous questions related to users’ music needs, uses, seeking, and management behaviors. In this paper, we present our preliminary findings, specifically focusing on the responses to questions of users’ favorite music related websites/applications and the reasons why they like them. We provide a list of popular music services, as well as an analysis of how these services are used, and what qualities are valued. Our findings suggest several trends in the types of music services people like: an increase in the popularity of music streaming and mobile music consumption, the emergence of new functionality, such as music identification and cloud music services, an appreciation of music videos, serendipitous discovery of music, and customizability, as well as users’ changing expectations of particular types of music information.",
"title": ""
},
{
"docid": "6cbdfa5b3cf8d64a9e62f8e0c9bc26aa",
"text": "In this paper, a novel approach to video temporal decomposition into semantic units, termed scenes, is presented. In contrast to previous temporal segmentation approaches that employ mostly low-level visual or audiovisual features, we introduce a technique that jointly exploits low-level and high-level features automatically extracted from the visual and the auditory channel. This technique is built upon the well-known method of the scene transition graph (STG), first by introducing a new STG approximation that features reduced computational cost, and then by extending the unimodal STG-based temporal segmentation technique to a method for multimodal scene segmentation. The latter exploits, among others, the results of a large number of TRECVID-type trained visual concept detectors and audio event detectors, and is based on a probabilistic merging process that combines multiple individual STGs while at the same time diminishing the need for selecting and fine-tuning several STG construction parameters. The proposed approach is evaluated on three test datasets, comprising TRECVID documentary films, movies, and news-related videos, respectively. The experimental results demonstrate the improved performance of the proposed approach in comparison to other unimodal and multimodal techniques of the relevant literature and highlight the contribution of high-level audiovisual features toward improved video segmentation to scenes.",
"title": ""
},
{
"docid": "c8598e04ef93f6127333b79a83508daf",
"text": "Nitric oxide (NO) is an important signaling molecule in multicellular organisms. Most animals produce NO from L-arginine via a family of dedicated enzymes known as NO synthases (NOSes). A rare exception is the roundworm Caenorhabditis elegans, which lacks its own NOS. However, in its natural environment, C. elegans feeds on Bacilli that possess functional NOS. Here, we demonstrate that bacterially derived NO enhances C. elegans longevity and stress resistance via a defined group of genes that function under the dual control of HSF-1 and DAF-16 transcription factors. Our work provides an example of interspecies signaling by a small molecule and illustrates the lifelong value of commensal bacteria to their host.",
"title": ""
},
{
"docid": "5f3dc141b69eb50e17bdab68a2195e13",
"text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.",
"title": ""
},
{
"docid": "8a77ab964896d3fea327e76b2efad8ef",
"text": "We present the fundamental ideas underlying statistical hypothesis testing using the frequentist framework. We start with a simple example that builds up the one-sample t-test from the beginning, explaining important concepts such as the sampling distribution of the sample mean, and the iid assumption. Then we examine the meaning of the p-value in detail, and discuss several important misconceptions about what a p-value does and does not tell us. This leads to a discussion of Type I, II error and power, and Type S and M error. An important conclusion from this discussion is that one should aim to carry out appropriately powered studies. Next, we discuss two common issues we have encountered in psycholinguistics and linguistics: running experiments until significance is reached, and the “garden-of-forking-paths” problem discussed by Gelman and others. The best way to use frequentist methods is to run appropriately powered studies, check model assumptions, clearly separate exploratory data analysis from planned comparisons decided upon before the study was run, and always attempt to replicate results.",
"title": ""
},
{
"docid": "16a95d66bcd74cdfc0e7369db90366b2",
"text": "The problem of authorship attribution – attributing texts to their original authors – has been an active research area since the end of the 19th century, attracting increased interest in the last decade. Most of the work on authorship attribution focuses on scenarios with only a few candidate authors, but recently considered cases with tens to thousands of candidate authors were found to be much more challenging. In this paper, we propose ways of employing Latent Dirichlet Allocation in authorship attribution. We show that our approach yields state-of-the-art performance for both a few and many candidate authors, in cases where these authors wrote enough texts to be modelled effectively.",
"title": ""
},
{
"docid": "c621f8fb5ea935707aae0b8b7fa21301",
"text": "Several database systems have implemented temporal data support, partly according to the model specified in the last SQL standard and partly according to other, older temporal models. In this article we use the most important temporal concepts to investigate their implementations in enterprise database systems. Also, we discuss strengths and weaknesses of these implementations and give suggestions for future extensions.",
"title": ""
},
{
"docid": "7ab7a2270c364bfad24ea155f003a032",
"text": "In this letter, we present a method of two-dimensional canonical correlation analysis (2D-CCA) where we extend the standard CCA in such a way that relations between two different sets of image data are directly sought without reshaping images into vectors. We stress that 2D-CCA dramatically reduces the computational complexity, compared to the standard CCA. We show the useful behavior of 2D-CCA through numerical examples of correspondence learning between face images in different poses and illumination conditions.",
"title": ""
},
{
"docid": "83580c373e9f91b021d90f520011a5da",
"text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.",
"title": ""
},
{
"docid": "b63338d2b3d720471ee610cc92e6abf9",
"text": "This article illustrates how creativity is constituted by forces beyond the innovating individual, drawing examples from the career of the eminent chemist Linus Pauling. From a systems perspective, a scientific theory or other product is creative only if the innovation gains the acceptance of a field of experts and so transforms the culture. In addition to this crucial selective function vis-à-vis the completed work, the social field can play a catalytic role, fostering productive interactions between person and domain throughout a career. Pauling's case yields examples of how variously the social field contributes to creativity, shaping the individual's standards of judgment and providing opportunities, incentives, and critical evaluation. A formidable set of strengths suited Pauling for his scientific achievements, but examination of his career qualifies the notion of a lone genius whose brilliance carries the day.",
"title": ""
},
{
"docid": "c79a3f831a7bcbcd164397a499cece29",
"text": "A new MOS-C bandpass-low-pass filter using the current feedback operational amplifier (CFOA) is presented. The filter employs two CFOA’s, eight MOS transistors operating in the nonsaturation region, and two grounded capacitors. The proposed MOS-C filter has the advantage of independent control ofQ and !o. PSpice simulation results for the proposed filter are given.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "63b31d490c626241b067c3d4d65764bf",
"text": "Context: This research is positioned in the field of methods for creating software design and the teaching thereof. Goal: The goal of this research is to study the effects of using a collection of examples for creating a software design. Method: We ran a controlled experiment for evaluating the use of a broad collection of examples for creating software designs by software engineering students. In this study, we focus on software designs as represented through UML class diagrams. The treatment is the use of the collection of examples. These examples are offered via a searchable repository. The outcome variable we study is the quality of the design (as assessed by a group of experts). After this, all students were offered the opportunity to improve their design using the collection of examples. We ran a post-assignment questionnaire to collect qualitative data about the experience of the participants. Results: Considering six quality attributes measured by experts, our results show that: 1) the models of the students who used examples are 18% better than those of who did not use examples. 2) the models of the students who did not use examples for constructing became 19% better after updating their models using examples. We complement our statistical analysis with insights from the post assignment questionnaire. Also, we observed that students are more confident about their design when they use examples. Conclusion: Students deliver better software designs when they use a collection of example software designs.",
"title": ""
},
{
"docid": "2c0274a267871c12310e0fd1716563d9",
"text": "We survey on the theoretical and practical developments of the theory of fuzzy logic and soft computing. Specifically, we briefly review the history and main milestones of fuzzy logic (in the wide sense), the more recent development of soft computing, and finalise by presenting a panoramic view of applications: from the most abstract to the most practical ones.",
"title": ""
}
] |
scidocsrr
|
d7f73a12e9ed93e9546d3cecb642b310
|
"Automotive radar the key technology for autonomous driving: From detection and ranging to environmental understanding"
|
[
{
"docid": "5464889be41072ecff03355bf45c289f",
"text": "Grid map registration is an important field in mobile robotics. Applications in which multiple robots are involved benefit from multiple aligned grid maps as they provide an efficient exploration of the environment in parallel. In this paper, a normal distribution transform (NDT)-based approach for grid map registration is presented. For simultaneous mapping and localization approaches on laser data, the NDT is widely used to align new laser scans to reference scans. The original grid quantization-based NDT results in good registration performances but has poor convergence properties due to discontinuities of the optimization function and absolute grid resolution. This paper shows that clustering techniques overcome disadvantages of the original NDT by significantly improving the convergence basin for aligning grid maps. A multi-scale clustering method results in an improved registration performance which is shown on real world experiments on radar data.",
"title": ""
},
{
"docid": "86c0b7d49d0cecc3a2554b85ec08f3ed",
"text": "Advanced driver assistance systems and the environment perception for autonomous vehicles will benefit from systems robustly tracking objects while simultaneously estimating their shape. Unlike many recent approaches that represent object shapes by approximated models such as boxes or ellipses, this paper proposes an algorithm that estimates a free-formed shape derived from raw laser measurements. For that purpose local occupancy grid maps are used to model arbitrary object shapes. Beside shape estimation the algorithm keeps a stable reference point on the object. This will be important to avoid apparent motion if the observable part of an object contour changes. The algorithm is part of a perception system and is tested with two 4-layer laser scanners.",
"title": ""
},
{
"docid": "94013936968a4864167ed4e764398deb",
"text": "A prime requirement for autonomous driving is a fast and reliable estimation of the motion state of dynamic objects in the ego-vehicle's surroundings. An instantaneous approach for extended objects based on two Doppler radar sensors has recently been proposed. In this paper, that approach is augmented by prior knowledge of the object's heading angle and rotation center. These properties can be determined reliably by state-of-the-art methods based on sensors such as LIDAR or cameras. The information fusion is performed utilizing an appropriate measurement model, which directly maps the motion state in the Doppler velocity space. This model integrates the geometric properties. It is used to estimate the object's motion state using a linear regression. Additionally, the model allows a straightforward calculation of the corresponding variances. The resulting method shows a promising accuracy increase of up to eight times greater than the original approach.",
"title": ""
}
] |
[
{
"docid": "ef15cf49c90ef4b115b42ee96fa24f93",
"text": "Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and questions and to fuse these multimodal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multimodal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a ‘co-attention’ mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-theart performance on the real-world VQA dataset. Code available at https://github.com/yuzcccc/mfb.",
"title": ""
},
{
"docid": "fe529aab49b0c985e40bab3ab0e0582c",
"text": "A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.",
"title": ""
},
{
"docid": "4ace08e06cd27fdfb85708cc95791952",
"text": "In this research communication on commutative algebra it was proposed to deal with Grobner Bases and its applications in signals and systems domain.This is one of the pioneering communications in dealing with Cryo-EM Image Processing application using multi-disciplinary concepts involving thermodynamics and electromagnetics based on first principles approach. keywords: Commutative Algebra/HOL/Scala/JikesRVM/Cryo-EM Images/CoCoALib/JAS Introduction & Inspiration : Cryo-Electron Microscopy (Cryo-EM) is an expanding structural biology technique that has recently undergone a quantum leap progression in its applicability to the study of challenging nano-bio systems,because crystallization is not required,only small amounts of sample are needed, and because images can be classified using a computer, the technique has the promising potential to deal with compositional as well as conformational mixtures.Cryo-EM can be used to investigate the complete and fully functional macromolecular complexes in different functional states, providing a richness of nano-bio systems insight. In this short communication,pointing to some of the principles behind the Cryo-EM methodology of single particle analysis via references and discussing Grobner bases application to challenging systems of paramount nano-bio importance is interesting. Special emphasis is on new methodological developments that are leading to an explosion of new studies, many of which are reaching resolutions that could only be dreamed of just few years ago.[1-9][Figures I-IV] There are two main challenges facing researchers in Cryo-EM Image Processing : “(1) The first challenge is that the projection images are extremely noisy (due to the low electron dose that can interact with each molecule before it is destroyed). (2) The second is that the orientations of the molecules that produced every image is unknown (unlike crystallography where the molecules are packed in a form of a crystal and therefore share the same known orientation).Overcoming these two challenges are very much principal in the science of CryoEM. “ according to Prof. Hadani. In the context of above mentioned challenges we intend to investigate and suggest Grobner bases to process Cryo-EM Images using Thermodynamics and Electromagnetics principles.The inspiration to write this short communication was derived mainly from the works of Prof.Buchberger and Dr.Rolf Landauer. source : The physical nature of information Rolf Landauer IBM T.J. Watson Research Center, P.O. Box 218. Yorktown Heights, NY 10598, USA . source : Gröbner Bases:A Short Introduction for Systems Theorists -Bruno Buchberger Research Institute for Symbolic Computation University of Linz,A4232 Schloss,Hagenberg,Austria. Additional interesting facts are observed from an article by Jon Cohen : “Structural Biology – Is HighTech View of HIV Too Good To Be True ?”. (http://davidcrowe.ca/SciHealthEnv/papers/9599-IsHighTechViewOfHIVTooGoodToBeTrue.pdf) Researchers are only interested in finding better software tools to refine the cryo-em image processing tasks on hand using all the mathematical tools at their disposal.Commutative Algebra is one such promising tool.Hence the justification for using Grobner Bases. Informatics Framework Design,Implementation & Analysis : Figure I. 
Mathematical Algorithm Implementation and Software Architecture -Overall Idea presented in the paper.Self Explanatory Graphical Algorithm Please Note : “Understanding JikesRVM in the Context of Cryo-EM/TEM/SEM Imaging Algorithms and Applications – A General Informatics Introduction from a Software Architecture View Point” by Nirmal & Gagik 2016 could be useful. Figure II. Mathematical Algorithm with various Grobner Bases Mathematical Tools/Software.Self Explanatory Graphical Algorithm Figure III.Scala and Java based Software Architecture Flow Self Explanatory Graphical Algorithm Figure IV. Mathematical Algorithm involving EM Field Theory & Thermodynamics Self Explanatory Graphical Algorithm",
"title": ""
},
{
"docid": "364124b0bc3a2af0e1a7a837a4344f55",
"text": "We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of Adrian E. Raftery is Professor of Statistics and Sociology, David Madigan is Assistant Professor of Statistics, both at the Department of Statistics,University of Washington, Box 354322, Seattle, WA 98195-4322. Jennifer Hoeting is Assistant Professor of Statistics at the Department of Statistics, Colorado State University, Fort Collins, CO 80523. The research of Raftery and Hoeting was partially supported by ONR Contract N-00014-91-J-1074. Madigan's research was partially supported by NSF grant no. DMS 92111627. The authors are grateful to Danika Lew for research assistance and the Editor, the Associate Editor, two anonymous referees and David Draper for very helpful comments that greatly improved the article.",
"title": ""
},
{
"docid": "e7cd57b352c86505304c47cda31e9177",
"text": "We introduce a new shape descriptor, the shape context , for measuring shape similarity and recovering point correspondences. The shape context describes the coarse arrangement of the shape with respect to a point inside or on the boundary of the shape. We use the shape context as a vector-valued attribute in a bipartite graph matching framework. Our proposed method makes use of a relatively small number of sample points selected from the set of detected edges; no special landmarks or keypoints are necessary. Tolerance and/or invariance to common image transformations are available within our framework. Using examples involving both silhouettes and edge images, we demonstrate how the solution to the graph matching problem provides us with correspondences and a dissimilarity score that can be used for object recognition and similarity-based retrieval.",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "acc960b2fd1066efce4655da837213f4",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.12.082 ⇑ Corresponding author. Tel.: +562 978 4834. E-mail addresses: goberreu@ing.uchile.cl (G. Ober (J.D. Velásquez). URL: http://wi.dii.uchile.cl/ (J.D. Velásquez). Plagiarism detection is of special interest to educational institutions, and with the proliferation of digital documents on the Web the use of computational systems for such a task has become important. While traditional methods for automatic detection of plagiarism compute the similarity measures on a document-to-document basis, this is not always possible since the potential source documents are not always available. We do text mining, exploring the use of words as a linguistic feature for analyzing a document by modeling the writing style present in it. The main goal is to discover deviations in the style, looking for segments of the document that could have been written by another person. This can be considered as a classification problem using self-based information where paragraphs with significant deviations in style are treated as outliers. This so-called intrinsic plagiarism detection approach does not need comparison against possible sources at all, and our model relies only on the use of words, so it is not language specific. We demonstrate that this feature shows promise in this area, achieving reasonable results compared to benchmark models. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0a78c9305d4b5584e87327ba2236d302",
"text": "This paper presents GeoS, a new algorithm for the efficient segmentation of n-dimensional image and video data. The segmentation problem is cast as approximate energy minimization in a conditional random field. A new, parallel filtering operator built upon efficient geodesic distance computation is used to propose a set of spatially smooth, contrast-sensitive segmentation hypotheses. An economical search algorithm finds the solution with minimum energy within a sensible and highly restricted subset of all possible labellings. Advantages include: i) computational efficiency with high segmentation accuracy; ii) the ability to estimate an approximation to the posterior over segmentations; iii) the ability to handle generally complex energy models. Comparison with max-flow indicates up to 60 times greater computational efficiency as well as greater memory efficiency. GeoS is validated quantitatively and qualitatively by thorough comparative experiments on existing and novel ground-truth data. Numerous results on interactive and automatic segmentation of photographs, video and volumetric medical image data are presented.",
"title": ""
},
{
"docid": "7d442e46cce8dd52ace274841cbb6182",
"text": "This paper presents the results of a compact wire-bond SP9T antenna switch that was designed as technology demonstrator for a 2.5 V CMOS, 250 fs Ron-Coff thin-film SOI process. Through “layout-driven” circuit design, a small die size of 1.52 mm2 was achieved for a fully integrated switch die containing RF-section, I/O pads, ESD, decoder, level shifters and dual frequency charge pump to generate negative vss. The dual frequency charge pump was a requirement to achieve a fast start-up time of 10 μs and switch rise times of 3 μs. A low insertion loss of 0.42 dB for cellular low-band at 915 MHz and 0.55 dB for cellular high-band at 1910 MHz, harmonic powers better -76 dBc over battery and Band I/V IMD3 of -110 dBm were achieved. All ports show a high ESD tolerance of 2 kV HBM.",
"title": ""
},
{
"docid": "19f0bf4e45e40ae18616cdf55ee5ab40",
"text": "Fournier's gangrene is a rare process which affects soft tissue in the genital and perirectal area. It can also progress to all different stages of sepsis, and abdominal compartment syndrome can be one of its complications. Two patients in septic shock due to Fournier gangrene were admitted to the Intensive Care Unit of Emergency Department. In both cases, infection started from the scrotum and the necrosis quickly involved genitals, perineal, and inguinal regions. Patients were treated with surgical debridement, protective colostomy, hyperbaric oxygen therapy, and broad-spectrum antibacterial chemotherapy. Vacuum-assisted closure (VAC) therapy was applied to the wound with the aim to clean, decontaminate, and avoid abdominal compartmental syndrome development. Both patients survived and were discharged from Intensive Care Unit after hyperbaric oxygen therapy cycles and abdominal closure.",
"title": ""
},
{
"docid": "d5ddc141311afb6050a58be88303b577",
"text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.",
"title": ""
},
{
"docid": "66c548d14007f82d2ab1c5337965e2ae",
"text": "The objective of this paper is to provide a review of recent advances in automatic vibration- and audio-based fault diagnosis in machinery using condition monitoring strategies. It presents the most valuable techniques and results in this field and highlights the most profitable directions of research to present. Automatic fault diagnosis systems provide greater security in surveillance of strategic infrastructures, such as electrical substations and industrial scenarios, reduce downtime of machines, decrease maintenance costs, and avoid accidents which may have devastating consequences. Automatic fault diagnosis systems include signal acquisition, signal processing, decision support, and fault diagnosis. The paper includes a comprehensive bibliography of more than 100 selected references which can be used by researchers working in this field.",
"title": ""
},
{
"docid": "02f1d15b8149cfd4a39442ca43fc46c5",
"text": "The present paper criticizes Chalmers's discussion of the Singularity, viewed as the emergence of a superhuman intelligence via the self-amplifying development of artificial intelligence. The situated and embodied view of cognition rejects the notion that intelligence could arise in a closed \"brain-in-a-vat\" system, because intelligence is rooted in a high-bandwidth, sensory-motor interaction with the outside world. Instead, it is proposed that superhuman intelligence can emerge only in a distributed fashion, in the form of a self-organizing network of humans, computers, and other technologies: the \"Global Brain\".",
"title": ""
},
{
"docid": "9f04f8b2adc1c3afe23f8c2202528734",
"text": "Fluorodeoxyglucose positron emission tomography (FDG-PET) imaging based 3D topographic brain glucose metabolism patterns from normal controls (NC) and individuals with dementia of Alzheimer's type (DAT) are used to train a novel multi-scale ensemble classification model. This ensemble model outputs a FDG-PET DAT score (FPDS) between 0 and 1 denoting the probability of a subject to be clinically diagnosed with DAT based on their metabolism profile. A novel 7 group image stratification scheme is devised that groups images not only based on their associated clinical diagnosis but also on past and future trajectories of the clinical diagnoses, yielding a more continuous representation of the different stages of DAT spectrum that mimics a real-world clinical setting. The potential for using FPDS as a DAT biomarker was validated on a large number of FDG-PET images (N=2984) obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database taken across the proposed stratification, and a good classification AUC (area under the curve) of 0.78 was achieved in distinguishing between images belonging to subjects on a DAT trajectory and those images taken from subjects not progressing to a DAT diagnosis. Further, the FPDS biomarker achieved state-of-the-art performance on the mild cognitive impairment (MCI) to DAT conversion prediction task with an AUC of 0.81, 0.80, 0.77 for the 2, 3, 5 years to conversion windows respectively.",
"title": ""
},
{
"docid": "90cbe212501ce1bfb7756fed1a707de5",
"text": "Auctioning constitutes a market-driven scheme for the allocation of cloud-based computing capacities. It is practically applied today in the context of Infrastructure as a Service offers, specifically, virtual machines. However, the maximization of auction profits poses a challenging task for the cloud provider, because it involves the concurrent determination of equilibrium prices and distribution of virtual machine instances to the underlying physical hosts in the data center. In the work at hand, we propose an optimal approach, based on linear programming, as well as a heuristic approach to tackle this Equilibrium Price Auction Allocation Problem (EPAAP). Through an evaluation based on realistic data, we show the practical applicability and benefits of our contributions. Specifically, we find that the heuristic approach reduces the average computation time to solve an EPAAP by more than 99.9%, but still maintains a favorable average solution quality of 96.7% in terms of cloud provider profit, compared to the optimal approach.",
"title": ""
},
{
"docid": "343f45efbdbf654c421b99927c076c5d",
"text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.",
"title": ""
},
{
"docid": "0bd34312fe7fd932cca206a791c085ec",
"text": "In this paper, an accurate implementation of American Sign Language Translator is presented. It is a portable electronic hand glove to be used by any deaf/mute person to communicate effectively with the othesr who don't understand sign language. It provides the visual and audible output on an LCD and through a speaker respectively. This glove consists of five flex sensors that senses the variation in different signs, an accelerometer to distinguish between the static and dynamic signs, a contact sensor, Arduino Mega 2560 for processing of the data, VoiceBox shield, LCD and Speaker for the outputs. There exists a communication gap between the normal and the disabled people. A simpler, easier, useful and efficient solution to fill this void is presented in this paper.",
"title": ""
},
{
"docid": "b44e7cfefa0ad351a86faa1e4baa038c",
"text": "Force-sensing system represents one of the vital components of robotic systems for physical interaction performances. This system is facilitated by the force controllable actuators. However, to make a force controllable actuator, it is still a challenging subject for the majority of the robotic system. This paper proposes a novel POWERPACK unit integrated a torque sensor, a harmonic drive and a motor for enabling the force control. The torque sensing element is based on the capacitance sensing to achieve a compact and a simple structure. In this research, reveals the practical details of the POWERPACK and evaluates the performance of the actuator unit.",
"title": ""
},
{
"docid": "0daa16a3f40612946187d6c66ccd96f4",
"text": "A 60 GHz frequency band planar diplexer based on Substrate Integrated Waveguide (SIW) technology is presented in this research. The 5th order millimeter wave SIW filter is investigated first, and then the 60 GHz SIW diplexer is designed and been simulated. SIW-microstrip transitions are also included in the final design. The relative bandwidths of up and down channels are 1.67% and 1.6% at 59.8 GHz and 62.2 GHz respectively. Simulation shows good channel isolation, small return losses and moderate insertion losses in pass bands. The diplexer can be easily integrated in millimeter wave integrated circuits.",
"title": ""
}
] |
scidocsrr
|
6725ff92fb19ccd0919733e1d79eff5a
|
WaterCooler: exploring an organization through enterprise social media
|
[
{
"docid": "43bf765a516109b885db5b6d1b873c33",
"text": "The attention economy motivates participation in peer-produced sites on the Web like YouTube and Wikipedia. However, this economy appears to break down at work. We studied a large internal corporate blogging community using log files and interviews and found that employees expected to receive attention when they contributed to blogs, but these expectations often went unmet. Like in the external blogosphere, a few people received most of the attention, and many people received little or none. Employees expressed frustration if they invested time and received little or no perceived return on investment. While many corporations are looking to adopt Web-based communication tools like blogs, wikis, and forums, these efforts will fail unless employees are motivated to participate and contribute content. We identify where the attention economy breaks down in a corporate blog community and suggest mechanisms for improvement.",
"title": ""
}
] |
[
{
"docid": "c76d8ac34709f84215e365e2412b9f4e",
"text": "Anti-virus vendors are confronted with a multitude of potentially malicious samples today. Receiving thousands of new samples every day is not uncommon. The signatures that detect confirmed malicious threats are mainly still created manually, so it is important to discriminate between samples that pose a new unknown threat and those that are mere variants of known malware.\n This survey article provides an overview of techniques based on dynamic analysis that are used to analyze potentially malicious samples. It also covers analysis programs that leverage these It also covers analysis programs that employ these techniques to assist human analysts in assessing, in a timely and appropriate manner, whether a given sample deserves closer manual inspection due to its unknown malicious behavior.",
"title": ""
},
{
"docid": "f6a1d7b206ca2796d4e91f3e8aceeed8",
"text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of theKα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of theKα operator in each rule. Results Correspondingauthor. Tel:+34-948166048. Fax:+34-948168924 Email addresses: joseantonio.sanz@unavarra.es (Jośe Antonio Sanz ), mikel.galar@unavarra.es (Mikel Galar),aranzazu.jurio@unavarra.es (Aranzazu Jurio), antonio.brugos@unavarra.es (Antonio Brugos), miguel.pagola@unavarra.es (Miguel Pagola),bustince@unavarra.es (Humberto Bustince) Preprint submitted to Elsevier November 13, 2013 © 2013. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/",
"title": ""
},
{
"docid": "7db555e42bff7728edb8fb199f063cba",
"text": "The need for more post-secondary students to major and graduate in STEM fields is widely recognized. Students' motivation and strategic self-regulation have been identified as playing crucial roles in their success in STEM classes. But, how students' strategy use, self-regulation, knowledge building, and engagement impact different learning outcomes is not well understood. Our goal in this study was to investigate how motivation, strategic self-regulation, and creative competency were associated with course achievement and long-term learning of computational thinking knowledge and skills in introductory computer science courses. Student grades and long-term retention were positively associated with self-regulated strategy use and knowledge building, and negatively associated with lack of regulation. Grades were associated with higher study effort and knowledge retention was associated with higher study time. For motivation, higher learning- and task-approach goal orientations, endogenous instrumentality, and positive affect and lower learning-, task-, and performance-avoid goal orientations, exogenous instrumentality and negative affect were associated with higher grades and knowledge retention and also with strategic self-regulation and engagement. Implicit intelligence beliefs were associated with strategic self-regulation, but not grades or knowledge retention. Creative competency was associated with knowledge retention, but not grades, and with higher strategic self-regulation. Implications for STEM education are discussed.",
"title": ""
},
{
"docid": "41fa9841bcda62c2df3893dde53f874e",
"text": "In clustering analysis, data attributes may have different contributions to the detection of various clusters. To solve this problem, the subspace clustering technique has been developed, which aims at grouping the data objects into clusters based on the subsets of attributes rather than the entire data space. However, the most existing subspace clustering methods are only applicable to either numerical or categorical data, but not both. This paper, therefore, studies the soft subspace clustering of data with both of the numerical and categorical attributes (also simply called mixed data for short). Specifically, an attribute-weighted clustering model based on the definition of object-cluster similarity is presented. Accordingly, a unified weighting scheme for the numerical and categorical attributes is proposed, which quantifies the attribute-to-cluster contribution by taking into account both of intercluster difference and intracluster similarity. Moreover, a rival penalized competitive learning mechanism is further introduced into the proposed soft subspace clustering algorithm so that the subspace cluster structure as well as the most appropriate number of clusters can be learned simultaneously in a single learning paradigm. In addition, an initialization-oriented method is also presented, which can effectively improve the stability and accuracy of $k$ -means-type clustering methods on numerical, categorical, and mixed data. The experimental results on different benchmark data sets show the efficacy of the proposed approach.",
"title": ""
},
{
"docid": "8210e2eec6a7a6905bdf57e685289d92",
"text": "Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are getting prohibitively high. Despite that the existing Outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of results returned from the third party has yet to be addressed. Aiming at tackling the challenge above, we propose a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption. Our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis show that the proposed schemes are proven secure and practical.",
"title": ""
},
{
"docid": "6c389a1e216519567cee3ceb9c79cef0",
"text": "This is the 9th edition of a successful textbook. The authors are two well-known and productive writers. K.C. Laudon, a professor of Information systems at Stern School of Business of the New York University, took his BA in Economics from Stanford and his Ph.D. from Columbia University. He is the author of twelve books and over forty articles about social, organizational and management impacts of information systems, privacy, ethics, and multimedia technology. Jane P. Laudon, a management consultant in information systems area, took her M.A. from Harvard University and her Ph.D. from Columbia University and authored seven books. Her main scientific interests are systems analysis, data management and software evaluation. The background, scientific interests and expertise of the authors and their previous works had an obvious impact on the manner this book was conceived, written and accompanied by auxiliary materials (the CD ROM and companion web site). The authors start from the premise that, nowadays, \"Information systems knowledge is essential for creating successful, competitive firms, managing global corporations, adding business value and providing useful products and service to customers\" (p. XIX). Moreover, they state that \"in many industries survival and even existence without the extensive use of IT is inconceivable\" (p.31). An important development the authors remark is the emergence of the digital firm, \"where nearly all core business processes and relationship with customers, suppliers and employees are digitally enabled\" (p.31). In the book the management information systems (MIS) is defined at large as \"the study of [computer based] information systems in business and management\" (p.44). Besides, the authors adopt a broader view of information systems (IS) \"which encompasses an understanding of the management and organizational dimensions as well as technical dimensions of the systems as information systems literacy\" (p.20). Consequently, this book can be viewed as an effort made by the Laudons with a view to contributing to building up and consolidating such an information system literacy for current and future managers, which are to be confronted with several \"major challenges concerning: a) \"information system investments\", b) \"strategic business\"„ c) \"globalization\", d) \"information infrastructure\", and e) \"ethics and security\" (p.28). The authors have noticed a \"user designer communication gap\". In Table 15.3 (p.552) they give several examples of that gap. While the user is concerned with problem solving related questions such as: \"Will the system deliver the information I need for my work?, \"How quickly can I access the data?\", ..., \"How will the system fit into my daily business schedule?\", the designer is preoccupied to find optimal answers to technology-oriented questions such as: \"How many lines of program code will it take to perform this function?\", ...,\"What database management system should we use? In order to help the future managers to successfully face the major challenges mentioned above and to solve the possible communication gap, the authors adopt a sociotechnical view and style of presentation. They combine technical aspects (drawn from computer science, management science, and operations",
"title": ""
},
{
"docid": "8e67a9de2f0d30de335f00bd1591aac5",
"text": "In recent years, IT Service Management (ITSM) has become one of the most researched areas of IT. Incident and Problem Management are two of the Service Operation processes in the IT Infrastructure Library (ITIL). These two processes aim to recognize, log, isolate and correct errors which occur in the environment and disrupt the delivery of services. Incident Management and Problem Management form the basis of the tooling provided by an Incident Ticket Systems (ITS).",
"title": ""
},
{
"docid": "64d9016ede168845d7e08c5eab1af448",
"text": "In the field of software engineering there are many new archetypes are introducing day to day Improve the efficiency and effectiveness of software development. Due to dynamic environment organizations are frequently exchanging their software constraint to meet their objectives. The propose research is a new approach by integrating the traditional V model and agile methodology to combining the strength of these models while minimizing their individual weakness.The fluctuating requirements of emerging a carried software system and accumulative cost of operational software are imposing researchers and experts to determine innovative and superior means for emerging software application at slight business or at enterprise level are viewing for. Agile methodology has its own benefits but there are deficiency several of the features of traditional software development methodologies that are essential for success. That’s why an embedded approach will be the right answer for software industry rather than a pure agile approach. This research shows how agile embedded traditional can play a vital role in development of software. A survey conducted to find the impact of this approach in industry. Both qualitative and quantitative analysis performed.",
"title": ""
},
{
"docid": "50471274efcc7fd7547dc6c0a1b3d052",
"text": "Recently, the UAS has been extensively exploited for data collection from remote and dangerous or inaccessible areas. While most of its existing applications have been directed toward surveillance and monitoring tasks, the UAS can play a significant role as a communication network facilitator. For example, the UAS may effectively extend communication capability to disaster-affected people (who have lost cellular and Internet communication infrastructures on the ground) by quickly constructing a communication relay system among a number of UAVs. However, the distance between the centers of trajectories of two neighboring UAVs, referred to as IUD, plays an important role in the communication delay and throughput. For instance, the communication delay increases rapidly while the throughput is degraded when the IUD increases. In order to address this issue, in this article, we propose a simple but effective dynamic trajectory control algorithm for UAVs. Our proposed algorithm considers that UAVs with queue occupancy above a threshold are experiencing congestion resulting in communication delay. To alleviate the congestion at UAVs, our proposal adjusts their center coordinates and also, if needed, the radius of their trajectory. The performance of our proposal is evaluated through computer-based simulations. In addition, we conduct several field experiments in order to verify the effectiveness of UAV-aided networks.",
"title": ""
},
{
"docid": "b9400c6d317f60dc324877d3a739fd17",
"text": "The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a “defect” (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although dozens of effect size statistics have been available for some time, many researchers were trained at a time when effect sizes were not emphasized, or perhaps even taught. Consequently, some readers may appreciate a review of how to estimate and interpret various effect sizes. In addition to the tutorial, the authors recommend effect size interpretations that emphasize direct and explicit comparisons of effects in a new study with those reported in the prior related literature, with a focus on evaluating result replicability.",
"title": ""
},
{
"docid": "7917e6a788cedd9f1dcb9c3fa132656e",
"text": "The smartphone industry has been one of the fastest growing technological areas in recent years. Naturally, the considerable market share of the Android OS and the diversity of app distribution channels besides the official Google Play Store has attracted the attention of malware authors. To deal with the increasing numbers of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community [8], [24], [25], [27], the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a completely automated, publicly available and comprehensive analysis system for Android applications. ANDRUBIS combines static analysis techniques with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage.",
"title": ""
},
{
"docid": "719c0101da1ddd2029974f5a795a48f7",
"text": "This article describes color naming by 51 American English-speaking informants. A free-naming task produced 122 monolexemic color terms, with which informants named the 330 Munsell samples from the World Color Survey. Cluster analysis consolidated those terms into a glossary of 20 named color categories: the 11 Basic Color Term (BCT) categories of Berlin and Kay (1969, p. 2) plus nine nonbasic chromatic categories. The glossed data revealed two color-naming motifs: the green-blue motif of the World Color Survey and a novel green-teal-blue motif, which featured peach, teal, lavender, and maroon as high-consensus terms. Women used more terms than men, and more women expressed the novel motif. Under a constrained-naming protocol, informants supplied BCTs for the color samples previously given nonbasic terms. Most of the glossed nonbasic terms from the free-naming task named low-consensus colors located at the BCT boundaries revealed by the constrained-naming task. This study provides evidence for continuing evolution of the color lexicon of American English, and provides insight into the processes governing this evolution.",
"title": ""
},
{
"docid": "d057a41ac6d148e985ffd230fa27f13e",
"text": "Data services via wireless networks and mobile devices have experienced rapid growth worldwide. We investigated the factors influencing adoption of wireless mobile data services (WMDS) in China and tested our model for explaining adoption intentions there. We argued that individuals form their intention to adopt WMDS under the influence of wireless mobile technology, the social environment, personal innovativeness of IT, trust awareness, and the facilitating conditions. We examined the simultaneous effects of these five influences on beliefs in the context of wireless Internet data services via mobile phones. Survey data were collected from 1432 participants in several metro cities across China. The findings suggest that WMDS adoption intention in China is determined by consumers’ perceived usefulness and perceived ease of use of WMDS. Theoretical and practical implications are included in our paper. Published by Elsevier B.V. www.elsevier.com/locate/im Available online at www.sciencedirect.com Information & Management 45 (2008) 52–64",
"title": ""
},
{
"docid": "ac843bd6a18025bb2cac3002dfb6f811",
"text": "For more efficient photoelectrochemical water splitting, there is a dilemma that a photoelectrode needs both light absorption and electrocatalytic faradaic reaction. One of the promising strategies is to deposit a pattern of electrocatalysts onto a semiconductor surface, leaving sufficient bare surface for light absorption while minimizing concentration overpotential as well as resistive loss at the ultramicroelectrodes for faradaic reaction. This scheme can be successfully realized by \"maskless\" direct photoelectrochemical patterning of electrocatalyst onto an SiOx/amorphous Si (a-Si) surface by the light-guided electrodeposition technique. Electrochemical impedance spectroscopy at various pHs tells us much about how it works. The surface states at the SiOx/a-Si interface can mediate the photogenerated electrons for hydrogen evolution, whereas electroactive species in the solution undergo outer-sphere electron transfer, taking electrons tunneling across the SiOx layer from the conduction band. In addition to previously reported long-distance lateral electron transport behavior at a patterned catalyst/SiOx/a-Si interface, the charging process of the surface states plays a crucial role in proton reduction, leading to deeper understanding of the operation mechanisms for photoelectrochemical water splitting.",
"title": ""
},
{
"docid": "c3ef6598f869e40fc399c89baf0dffd8",
"text": "In this article, a novel hybrid genetic algorithm is proposed. The selection operator, crossover operator and mutation operator of the genetic algorithm have effectively been improved according to features of Sudoku puzzles. The improved selection operator has impaired the similarity of the selected chromosome and optimal chromosome in the current population such that the chromosome with more abundant genes is more likely to participate in crossover; such a designed crossover operator has possessed dual effects of self-experience and population experience based on the concept of tactfully combining PSO, thereby making the whole iterative process highly directional; crossover probability is a random number and mutation probability changes along with the fitness value of the optimal solution in the current population such that more possibilities of crossover and mutation could then be considered during the algorithm iteration. The simulation results show that the convergence rate and stability of the novel algorithm has significantly been improved.",
"title": ""
},
{
"docid": "244be1e978813811e3f5afc1941cd4f5",
"text": "In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators achieving 0.6841 in Fleiss κ. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.",
"title": ""
},
{
"docid": "fbe8379aa9af67d746df0c2335f3675a",
"text": "The large volume of data produced by the increasingly deployed Internet of Things (IoT), is shifting security priorities to consider data access control from a data-centric perspective. To secure the IoT, it becomes essential to implement a data access control solution that offers the necessary flexibility required to manage a large number of IoT devices. The concept of Ciphertext-Policy Attribute-based Encryption (CP-ABE) fulfills such requirement. It allows the data source to encrypt data while cryptographically enforcing a security access policy, whereby only authorized data users with the desired attributes are able to decrypt data. Yet, despite these manifest advantages; CP-ABE has not been designed taking into consideration energy efficiency. Many IoT devices, like sensors and actuators, cannot be part of CP-ABE enforcement points, because of their resource limitations in terms of CPU, memory, battery, etc. In this paper, we propose to extend the basic CP-ABE scheme using effective pre-computation techniques. We will experimentally compute the energy saving potential offered by the proposed variant of CP-ABE, and thus demonstrate the applicability of CP-ABE in the IoT.",
"title": ""
},
{
"docid": "74136e5c4090cc990f62c399781c9bb3",
"text": "This paper compares statistical techniques for text classification using Naïve Bayes and Support Vector Machines, in context of Urdu language. A large corpus is used for training and testing purpose of the classifiers. However, those classifiers cannot directly interpret the raw dataset, so language specific preprocessing techniques are applied on it to generate a standardized and reduced-feature lexicon. Urdu language is morphological rich language which makes those tasks complex. Statistical characteristics of corpus and lexicon are measured which show satisfactory results of text preprocessing module. The empirical results show that Support Vector Machines outperform Naïve Bayes classifier in terms of classification accuracy.",
"title": ""
},
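A minimal sketch of the comparison described above, assuming the Urdu corpus has already passed through the language-specific preprocessing step; the scikit-learn components and the TF-IDF weighting are stand-ins for whatever implementations the authors actually used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def compare_classifiers(docs, labels):
    """Compare Naive Bayes and a linear SVM on bag-of-words features; `docs` is
    assumed to hold the already normalized, reduced-lexicon Urdu documents."""
    results = {}
    for name, clf in [("naive_bayes", MultinomialNB()), ("linear_svm", LinearSVC())]:
        pipe = make_pipeline(TfidfVectorizer(), clf)
        results[name] = cross_val_score(pipe, docs, labels, cv=5).mean()
    return results
```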
{
"docid": "7b8c56a03653509c729b37e1ce4d33fc",
"text": "Systems for declarative large-scale machine learning (ML) algorithms aim at high-level algorithm specification and automatic optimization of runtime execution plans. State-ofthe-art compilers rely on algebraic rewrites and operator selection, including fused operators to avoid materialized intermediates, reduce memory bandwidth requirements, and exploit sparsity across chains of operations. However, the unlimited number of relevant patterns for rewrites and operators poses challenges in terms of development effort and high performance impact. Query compilation has been studied extensively in the database literature, but ML programs additionally require handling linear algebra and exploiting algebraic properties, DAG structures, and sparsity. In this paper, we introduce Spoof, an architecture to automatically (1) identify algebraic simplification rewrites, and (2) generate fused operators in a holistic framework. We describe a snapshot of the overall system, including key techniques of sum-product optimization and code generation. Preliminary experiments show performance close to hand-coded fused operators, significant improvements over a baseline without fused operators, and moderate compilation overhead.",
"title": ""
},
{
"docid": "c64751968597299dc5622f589742c37d",
"text": "OpenFlow switching and Network Operating System (NOX) have been proposed to support new conceptual networking trials for fine-grained control and visibility. The OpenFlow is expected to provide multi-layer networking with switching capability of Ethernet, MPLS, and IP routing. NOX provides logically centralized access to high-level network abstraction and exerts control over the network by installing flow entries in OpenFlow compatible switches. The NOX, however, is missing the necessary functions for QoS-guaranteed software defined networking (SDN) service provisioning on carrier grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaborations among control elements in other domain network. In this paper, we propose a QoS-aware Network Operating System (QNOX) for SDN with Generalized OpenFlows. The functional modules and operations of QNOX for QoS-aware SDN service provisioning with the major components (e.g., service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE)) are explained in detail. The current status of prototype implementation and performances are explained. The scalability of the QNOX is also analyzed to confirm that the proposed framework can be applied for carrier grade large scale provider Internet1.",
"title": ""
}
] |
scidocsrr
|
9effaeade1a16756f3625880c2879c12
|
A Generalization of Regenerating Codes for Clustered Storage Systems
|
[
{
"docid": "26597dea3d011243a65a1d2acdae19e8",
"text": "Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. The bandwidth required to repair the system after a node failure also plays a crucial role in the system performance. In [1] authors have shown that a tradeoff exists between storage and repair bandwidth. They also have introduced the scheme of regenerating codes which meet this tradeoff. In this paper, a scheme of Exact Regenerating Codes is introduced, which are regenerating codes with an additional property of regenerating back the same node upon failure. For the minimum bandwidth point, which is suitable for applications like distributed mail servers, explicit construction for exact regenerating codes is provided. A subspace approach is provided, using which the necessary and sufficient conditions for a linear code to be an exact regenerating code are derived. This leads to the uniqueness of our construction. For the minimum storage point which suits applications such as storage in peer-to-peer systems, an explicit construction of regenerating codes for certain suitable parameters is provided. This code supports variable number of nodes and can handle multiple simultaneous node failures. The constructions given for both the points require a low field size and have low complexity.",
"title": ""
}
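For context, the two extreme points of the storage-repair bandwidth tradeoff that this abstract builds on are usually written as follows (file size M, reconstruction from any k of n nodes, repair from d helpers); these are the standard expressions from the regenerating-codes literature, quoted as background rather than taken from the passage itself.

```latex
% Minimum-storage regenerating (MSR) point:
%   per-node storage \alpha and total repair bandwidth \gamma = d\beta
\alpha_{\mathrm{MSR}} = \frac{M}{k}, \qquad
\gamma_{\mathrm{MSR}} = \frac{M d}{k\,(d - k + 1)}

% Minimum-bandwidth regenerating (MBR) point:
\alpha_{\mathrm{MBR}} = \gamma_{\mathrm{MBR}} = \frac{2 M d}{k\,(2d - k + 1)}
```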
] |
[
{
"docid": "12eff845ccb6e5cc2b2fbe74935aff46",
"text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.",
"title": ""
},
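The localization front-end described above (grayscale conversion, plate detection and cropping before character segmentation) might look roughly like the following OpenCV sketch; the thresholds, the aspect-ratio filter and the candidate limit are placeholders rather than the paper's parameters.

```python
import cv2

def locate_plate_candidates(image_bgr):
    """Return cropped grayscale regions that look like license plates."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.bilateralFilter(gray, 11, 17, 17)        # smooth while keeping edges
    edges = cv2.Canny(blurred, 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
        x, y, w, h = cv2.boundingRect(c)
        if 2.0 < w / float(h) < 6.0:                       # plate-like aspect ratio
            candidates.append(gray[y:y + h, x:x + w])
    return candidates
```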
{
"docid": "0ccde44cffc4d888668b14370e147529",
"text": "Bitcoin is a crypto currency with several advantages over previous approaches. Transactions are con®rmed and stored by a peer-to-peer network in a blockchain. Therefore, all transactions are public and soon solutions where designed to increase privacy in Bitcoin Many come with downsides, like requiring a trusted third-party or requiring modi®cations to Bitcoin. In this paper, we compare these approaches according to several criteria. Based on our ®ndings, CoinJoin emerges as the best approach for anonymizing Bitcoins today.",
"title": ""
},
{
"docid": "df4d0112eecfcc5c6c57784d1a0d010d",
"text": "2 The design and measured results are reported on three prototype DC-DC converters which successfully demonstrate the design techniques of this thesis and the low-power enabling capabilities of DC-DC converters in portable applications. Voltage scaling for low-power throughput-constrained digital signal processing is reviewed and is shown to provide up to an order of magnitude power reduction compared to existing 3.3 V standards when enabled by high-efficiency low-voltage DC-DC conversion. A new ultra-low-swing I/O strategy, enabled by an ultra-low-voltage and low-power DCDC converter, is used to reduce the power of high-speed inter-chip communication by greater than two orders of magnitude. Dynamic voltage scaling is proposed to dynamically trade general-purpose processor throughput for energy-efficiency, yielding up to an order of magnitude improvement in the average energy per operation of the processor. This is made possible by a new class of voltage converter, called the dynamic DC-DC converter, whose primary performance objectives and design considerations are introduced in this thesis. Robert W. Brodersen, Chairman of Committee Table of",
"title": ""
},
{
"docid": "08a7621fe99afba5ec9a78c76192f43d",
"text": "Orthogonal Frequency Division Multiple Access (OFDMA) as well as other orthogonal multiple access techniques fail to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the users' signals separation at the receiver, which degrade the system spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose a NOMA scheme for uplink that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation has shown that the proposed scheme achieves bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness comparing to OFDMA.",
"title": ""
},
{
"docid": "ff3229e4afdedd01a936c7e70f8d0d02",
"text": "This paper highlights an updated anatomy of parametrial extension with emphasis on magnetic resonance imaging (MRI) assessment of disease spread in the parametrium in patients with locally advanced cervical cancer. Pelvic landmarks were identified to assess the anterior and posterior extensions of the parametria, besides the lateral extension, as defined in a previous anatomical study. A series of schematic drawings and MRI images are shown to document the anatomical delineation of disease on MRI, which is crucial not only for correct image-based three-dimensional radiotherapy but also for the surgical oncologist, since neoadjuvant chemoradiotherapy followed by radical surgery is emerging in Europe as a valid alternative to standard chemoradiation.",
"title": ""
},
{
"docid": "f330cfad6e7815b1b0670217cd09b12e",
"text": "In this paper we study the effect of false data injection attacks on state estimation carried over a sensor network monitoring a discrete-time linear time-invariant Gaussian system. The steady state Kalman filter is used to perform state estimation while a failure detector is employed to detect anomalies in the system. An attacker wishes to compromise the integrity of the state estimator by hijacking a subset of sensors and sending altered readings. In order to inject fake sensor measurements without being detected the attacker will need to carefully design his actions to fool the estimator as abnormal sensor measurements would result in an alarm. It is important for a designer to determine the set of all the estimation biases that an attacker can inject into the system without being detected, providing a quantitative measure of the resilience of the system to such attacks. To this end, we will provide an ellipsoidal algorithm to compute its inner and outer approximations of such set. A numerical example is presented to further illustrate the effect of false data injection attack on state estimation.",
"title": ""
},
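The failure detector mentioned above is typically a chi-square test on the Kalman innovation; the sketch below shows that test, which is what a stealthy false-data injection must evade, with the threshold choice left as an assumption.

```python
import numpy as np

def chi_square_detector(residual, S_inv, threshold):
    # Normalized innovation g = r^T S^{-1} r; raise an alarm when it exceeds the
    # chi-square threshold. An undetectable attack must keep the biased residual
    # inside this acceptance region, which is the set the paper approximates
    # with ellipsoids.
    g = float(residual.T @ S_inv @ residual)
    return g > threshold

# Hypothetical usage, with r = y - C @ x_hat the innovation and S its covariance:
# from scipy.stats import chi2
# alarm = chi_square_detector(r, np.linalg.inv(S), chi2.ppf(0.99, df=r.size))
```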
{
"docid": "27cc510f79a4ed76da42046b49bbb9fd",
"text": "This article reports the orthodontic treatment ofa 25-year-old female patient whose chief complaint was the inclination of the maxillary occlusal plane in front view. The individualized vertical placement of brackets is described. This placement made possible a symmetrical occlusal plane to be achieved in a rather straightforward manner without the need for further technical resources.",
"title": ""
},
{
"docid": "1e4ecef47048e1f724733fa19526935f",
"text": "Theories of aggressive behavior and ethological observations in animals and children suggest the existence of distinct forms of reactive (hostile) and proactive (instrumental) aggression. Toward the validation of this distinction, groups of reactive aggressive, proactive aggressive, and nonaggressive children were identified (n = 624 9-12-year-olds). Social information-processing patterns were assessed in these groups by presenting hypothetical vignettes to subjects. 3 hypotheses were tested: (1) only the reactive-aggressive children would demonstrate hostile biases in their attributions of peers' intentions in provocation situations (because such biases are known to lead to reactive anger); (2) only proactive-aggressive children would evaluate aggression and its consequences in relatively positive ways (because proactive aggression is motivated by its expected external outcomes); and (3) proactive-aggressive children would select instrumental social goals rather than relational goals more often than nonaggressive children. All 3 hypotheses were at least partially supported.",
"title": ""
},
{
"docid": "723cf2a8b6142a7e52a0ff3fb74c3985",
"text": "The Internet of Mobile Things (IoMT) requires support for a data lifecycle process ranging from sorting, cleaning and monitoring data streams to more complex tasks such as querying, aggregation, and analytics. Current solutions for stream data management in IoMT have been focused on partial aspects of a data lifecycle process, with special emphasis on sensor networks. This paper aims to address this problem by developing an offline and real-time data lifecycle process that incorporates a layered, data-flow centric, and an edge/cloud computing approach that is needed for handling heterogeneous, streaming and geographicallydispersed IoMT data streams. We propose an end to end architecture to support an instant intra-layer communication that establishes a stream data flow in real-time to respond to immediate data lifecycle tasks at the edge layer of the system. Our architecture also provides offline functionalities for later analytics and visualization of IoMT data streams at the core layer of the system. Communication and process are thus the defining factors in the design of our stream data management solution for IoMT. We describe and evaluate our prototype implementation using real-time transit data feeds and a commercial edge-based platform. Preliminary results are showing the advantages of running data lifecycle tasks at the edge of the network for reducing the volume of data streams that are redundant and should not be transported to the cloud. Keywords—stream data lifecycle, edge computing, cloud computing, Internet of Mobile Things, end to end architectures",
"title": ""
},
{
"docid": "f6d08e76bfad9c4988253b643163671a",
"text": "This paper proposes a technique for unwanted lane departure detection. Initially, lane boundaries are detected using a combination of the edge distribution function and a modified Hough transform. In the tracking stage, a linear-parabolic lane model is used: in the near vision field, a linear model is used to obtain robust information about lane orientation; in the far field, a quadratic function is used, so that curved parts of the road can be efficiently tracked. For lane departure detection, orientations of both lane boundaries are used to compute a lane departure measure at each frame, and an alarm is triggered when such measure exceeds a threshold. Experimental results indicate that the proposed system can fit lane boundaries in the presence of several image artifacts, such as sparse shadows, lighting changes and bad conditions of road painting, being able to detect in advance involuntary lane crossings. q 2005 Elsevier Ltd All rights reserved.",
"title": ""
},
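A rough sketch of the detection stage above, using a plain Hough transform in place of the paper's edge distribution function and linear-parabolic model; the angle-based departure measure below is a simplification for illustration only.

```python
import cv2
import numpy as np

def lane_departure_measure(frame_gray):
    """Estimate a crude departure measure from the angles of the two dominant
    lane boundaries; symmetric boundaries give angles summing to about pi, and
    the imbalance signals lateral drift."""
    edges = cv2.Canny(frame_gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None
    thetas = lines[:, 0, 1]                      # angle of each detected line
    left = thetas[thetas < np.pi / 2]
    right = thetas[thetas >= np.pi / 2]
    if len(left) == 0 or len(right) == 0:
        return None
    return float(left.mean() + right.mean() - np.pi)
```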
{
"docid": "449dbec9bcfe268a5db432c116a61087",
"text": "Cake appearance is an important attribute of freeze-dried products, which may or may not be critical with respect to product quality (i.e., safety and efficacy). Striving for \"uniform and elegant\" cake appearance may continue to remain an important goal during the design and development of a lyophilized drug product. However, \"sometimes\" a non-ideal cake appearance has no impact on product quality and is an inherent characteristic of the product (due to formulation, drug product presentation, and freeze-drying process). This commentary provides a summary of challenges related to visual appearance testing of freeze-dried products, particularly on how to judge the criticality of cake appearance. Furthermore, a harmonized nomenclature and description for variations in cake appearance from the ideal expectation of uniform and elegant is provided, including representative images. Finally, a science and risk-based approach is discussed on establishing acceptance criteria for cake appearance.",
"title": ""
},
{
"docid": "37dcc23a5504466a5f8200f281487888",
"text": "Computational approaches that 'dock' small molecules into the structures of macromolecular targets and 'score' their potential complementarity to binding sites are widely used in hit identification and lead optimization. Indeed, there are now a number of drugs whose development was heavily influenced by or based on structure-based design and screening strategies, such as HIV protease inhibitors. Nevertheless, there remain significant challenges in the application of these approaches, in particular in relation to current scoring schemes. Here, we review key concepts and specific features of small-molecule–protein docking methods, highlight selected applications and discuss recent advances that aim to address the acknowledged limitations of established approaches.",
"title": ""
},
{
"docid": "7ea89697894cb9e0da5bfcebf63be678",
"text": "This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
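Since the method above adapts the skip-gram model, the training step can be sketched with gensim once concept mentions have been collapsed into single tokens; the toy corpus and hyperparameters below are purely illustrative, and the corpus construction from Wikipedia and Probase is the part not shown here.

```python
from gensim.models import Word2Vec

# Each "sentence" is a token stream where concept mentions have been replaced
# by single identifiers, so skip-gram learns one vector per concept.
corpus = [["barack_obama", "was", "elected", "president"],
          ["machine_learning", "is", "a", "subfield", "of", "artificial_intelligence"]]

model = Word2Vec(corpus, vector_size=100, window=5, sg=1, min_count=1, epochs=50)
vec = model.wv["barack_obama"]           # learned concept embedding
```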
{
"docid": "bf48f9ac763b522b8d43cfbb281fbffa",
"text": "We present a declarative framework for collective deduplication of entity references in the presence of constraints. Constraints occur naturally in many data cleaning domains and can improve the quality of deduplication. An example of a constraint is \"each paper has a unique publication venue''; if two paper references are duplicates, then their associated conference references must be duplicates as well. Our framework supports collective deduplication, meaning that we can dedupe both paper references and conference references collectively in the example above. Our framework is based on a simple declarative Datalog-style language with precise semantics. Most previous work on deduplication either ignoreconstraints or use them in an ad-hoc domain-specific manner. We also present efficient algorithms to support the framework. Our algorithms have precise theoretical guarantees for a large subclass of our framework. We show, using a prototype implementation, that our algorithms scale to very large datasets. We provide thoroughexperimental results over real-world data demonstrating the utility of our framework for high-quality and scalable deduplication.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "00ed53e43725d782b38c185faa2c8fd2",
"text": "In this paper we evaluate tensegrity probes on the basis of the EDL phase performance of the probe in the context of a mission to Titan. Tensegrity probes are structurally designed around tension networks and are composed of tensile and compression elements. Such probes have unique physical force distribution properties and can be both landing and mobility platforms, allowing for dramatically simpler mission profile and reduced costs. Our concept is to develop a tensegrity probe in which the tensile network can be actively controlled to enable compact stowage for launch followed by deployment in preparation for landing. Due to their natural compliance and structural force distribution properties, tensegrity probes can safely absorb significant impact forces, enabling high speed Entry, Descent, and Landing (EDL) scenarios where the probe itself acts much like an airbag. However, unlike an airbag which must be discarded after a single use, the tensegrity probe can actively control its shape to provide compliant rolling mobility while still maintaining its ability to safely absorb impact shocks that might occur during exploration. (See Figure 1) This combination of functions from a single structure enables compact and light-weight planetary exploration missions with the capabilities of traditional wheeled rovers, but with the mass and cost similar or less than a stationary probe. In this paper we cover this new mission concept and tensegrity probe technologies for compact storage, EDL, and surface mobility, with an focus on analyzing the landing phase performance and ability to protect and deliver scientific payloads. The analysis is then supported with results from physical prototype drop-tests.",
"title": ""
},
{
"docid": "5815fb8da17375f24bbdeab7af91f3a3",
"text": "We introduce a new method for framesemantic parsing that significantly improves the prior state of the art. Our model leverages the advantages of a deep bidirectional LSTM network which predicts semantic role labels word by word and a relational network which predicts semantic roles for individual text expressions in relation to a predicate. The two networks are integrated into a single model via knowledge distillation, and a unified graphical model is employed to jointly decode frames and semantic roles during inference. Experiments on the standard FrameNet data show that our model significantly outperforms existing neural and non-neural approaches, achieving a 5.7 F1 gain over the current state of the art, for full frame structure extraction.",
"title": ""
},
{
"docid": "6cb480efca7138e26ce484eb28f0caec",
"text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients. This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm. ∗Economics Department, University of Virginia, Charlottesville, VA and RAND Corporation †MIT Sloan School of Management, MIT, Cambridge, MA and NBER ‡All errors are our own.",
"title": ""
},
{
"docid": "e2737102af24a27c4f531e5242807c76",
"text": "We present the design, fabrication, and characterization of a fiber optically sensorized robotic hand for multi purpose manipulation tasks. The robotic hand has three fingers that enable both pinch and power grips. The main bone structure was made of a rigid plastic material and covered by soft skin. Both bone and skin contain embedded fiber optics for force and tactile sensing, respectively. Eight fiber optic strain sensors were used for rigid bone force sensing, and six fiber optic strain sensors were used for soft skin tactile sensing. For characterization, different loads were applied in two orthogonal axes at the fingertip and the sensor signals were measured from the bone structure. The skin was also characterized by applying a light load on different places for contact localization. The actuation of the hand was achieved by a tendon-driven under-actuated system. Gripping motions are implemented using an active tendon located on the volar side of each finger and connected to a motor. Opening motions of the hand were enabled by passive elastic tendons located on the dorsal side of each finger.",
"title": ""
}
] |
scidocsrr
|
d3e52dc43d6509c57475809902b20a26
|
Visual Madlibs: Fill in the blank Image Generation and Question Answering
|
[
{
"docid": "55b9284f9997b18d3b1fad9952cd4caa",
"text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.",
"title": ""
}
] |
[
{
"docid": "543348825e8157926761b2f6a7981de2",
"text": "With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.",
"title": ""
},
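The classical ISTA iteration that ISTA-Net unrolls into network layers looks like the sketch below; the step size, regularization weight and iteration count are generic choices, and the learned nonlinear transforms of the actual network are not shown.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(Phi, y, lam, step, n_iter=200):
    """Classical ISTA for min_x 0.5*||y - Phi x||^2 + lam*||x||_1; ISTA-Net
    unrolls this loop into layers and learns the transform, thresholds and steps."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)             # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```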
{
"docid": "54f95cef02818cb4eb86339ee12a8b07",
"text": "The problem of discontinuities in broadband multisection coupled-stripline 3-dB directional couplers, phase shifters, high-pass tapered-line 3-dB directional couplers, and magic-T's, regarding the connections of coupled and terminating signal lines, is comprehensively investigated in this paper for the first time. The equivalent circuit of these discontinuities proposed in Part I has been used for accurate modeling of the broadband multisection and ultra-broadband high-pass coupled-stripline circuits. It has been shown that parasitic reactances, which result from the connections of signal and coupled lines, severely deteriorate the return losses and the isolation of such circuits and also-in case of tapered-line directional couplers-the coupling responses. Moreover, it has been proven theoretically and experimentally that these discontinuity effects can be substantially reduced by introducing compensating shunt capacitances in a number of cross sections of coupled and signal lines. Results of measurements carried out for various designed and manufactured coupled-line circuits have been very promising and have proven the efficiency of the proposed broadband compensation technique. The theoretical and measured data are given for the following coupled-stripline circuits: a decade-bandwidth asymmetric three-section 3-dB directional coupler, a decade-bandwidth three-section phase-shifter compensator, and a high-pass asymmetric tapered-line 3-dB coupler",
"title": ""
},
{
"docid": "e3027e5a2cd00142eb3e227ba2ac73dd",
"text": "Modern cars are already incredibly smart environments today due to the sheer number of sensors and processors packed into a small space. Likewise, new technologies in human-computer interaction increasingly find their way inside, e.g. eye tracking, speech interaction and gesture recognition. The support of new modalities is promising a reduction of driver distraction and a better handling of an increasing number of functions offered by in-vehicle systems. With multiple modalities to choose from, which can be combined arbitrarily via multimodal fusion, drivers can make a free choice depending on the demands of the situation and their preferences. Our paper presents a prototype in-car system that allows car features (like turning lights and windows) to be controlled by combinations of speech, gaze, and micro-gestures. We propose an interaction concept, sketch our architecture based on a domain-independent multimodal dialogue platform, and draw some first conclusions on the outcome.",
"title": ""
},
{
"docid": "01ba1a2087b177895dceff8675e92bbb",
"text": "The beer game is a widely used in-class game that is played in supply chain management classes to demonstrate the bullwhip effect. The game is a decentralized, multi-agent, cooperative problem that can be modeled as a serial supply chain network in which agents cooperatively attempt to minimize the total cost of the network even though each agent can only observe its own local information. Each agent chooses order quantities to replenish its stock. Under some conditions, a base-stock replenishment policy is known to be optimal. However, in a decentralized supply chain in which some agents (stages) may act irrationally (as they do in the beer game), there is no known optimal policy for an agent wishing to act optimally. We propose a machine learning algorithm, based on deep Q-networks, to optimize the replenishment decisions at a given stage. When playing alongside agents who follow a base-stock policy, our algorithm obtains near-optimal order quantities. It performs much better than a base-stock policy when the other agents use a more realistic model of human ordering behavior. Unlike most other algorithms in the literature, our algorithm does not have any limits on the beer game parameter values. Like any deep learning algorithm, training the algorithm can be computationally intensive, but this can be performed ahead of time; the algorithm executes in real time when the game is played. Moreover, we propose a transfer learning approach so that the training performed for one agent and one set of cost coefficients can be adapted quickly for other agents and costs. Our algorithm can be extended to other decentralized multi-agent cooperative games with partially observed information, which is a common type of situation in real-world supply chain problems.",
"title": ""
},
{
"docid": "e49e65b40bf1cccdcbf223a109bec267",
"text": "Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model’s prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on blackbox models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.",
"title": ""
},
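The core Monte-Carlo estimate behind RISE can be sketched as follows, assuming black-box access to a `model_predict` function that returns the target-class score for one image; the nearest-neighbour mask upsampling is a simplification of the smooth, randomly shifted masks used in the paper.

```python
import numpy as np

def rise_saliency(model_predict, image, n_masks=2000, p_keep=0.5, cell=7):
    """Average the model's target-class score over randomly masked copies of the
    image, weighted by each mask; `image` is assumed to be H x W x 3."""
    H, W = image.shape[:2]
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        coarse = (np.random.rand(cell, cell) < p_keep).astype(float)
        # Nearest-neighbour upsampling of the coarse grid to image size.
        mask = np.kron(coarse, np.ones((H // cell + 1, W // cell + 1)))[:H, :W]
        score = model_predict(image * mask[..., None])   # scalar score of target class
        saliency += score * mask
    return saliency / (n_masks * p_keep)
```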
{
"docid": "13dde903c4568b7077d43e1786a1175b",
"text": "In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction. ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification.",
"title": ""
},
{
"docid": "f231bff77a403fe18a445d894e9b93e5",
"text": "The geographical location of Internet IP addresses is important for academic research, commercial and homeland security applications. Thus, both commercial and academic databases and tools are available for mapping IP addresses to geographic locations. Evaluating the accuracy of these mapping services is complex since obtaining diverse large scale ground truth is very hard. In this work we evaluate mapping services using an algorithm that groups IP addresses to PoPs, based on structure and delay. This way we are able to group close to 100,000 IP addresses world wide into groups that are known to share a geo-location with high confidence. We provide insight into the strength and weaknesses of IP geolocation databases, and discuss their accuracy and encountered anomalies.",
"title": ""
},
{
"docid": "4535a5961d6628f2f4bafb1d99821bbb",
"text": "The prevalence of diabetes has dramatically increased worldwide due to the vast increase in the obesity rate. Diabetic nephropathy is one of the major complications of type 1 and type 2 diabetes and it is currently the leading cause of end-stage renal disease. Hyperglycemia is the driving force for the development of diabetic nephropathy. It is well known that hyperglycemia increases the production of free radicals resulting in oxidative stress. While increases in oxidative stress have been shown to contribute to the development and progression of diabetic nephropathy, the mechanisms by which this occurs are still being investigated. Historically, diabetes was not thought to be an immune disease; however, there is increasing evidence supporting a role for inflammation in type 1 and type 2 diabetes. Inflammatory cells, cytokines, and profibrotic growth factors including transforming growth factor-β (TGF-β), monocyte chemoattractant protein-1 (MCP-1), connective tissue growth factor (CTGF), tumor necrosis factor-α (TNF-α), interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-18 (IL-18), and cell adhesion molecules (CAMs) have all been implicated in the pathogenesis of diabetic nephropathy via increased vascular inflammation and fibrosis. The stimulus for the increase in inflammation in diabetes is still under investigation; however, reactive oxygen species are a primary candidate. Thus, targeting oxidative stress-inflammatory cytokine signaling could improve therapeutic options for diabetic nephropathy. The current review will focus on understanding the relationship between oxidative stress and inflammatory cytokines in diabetic nephropathy to help elucidate the question of which comes first in the progression of diabetic nephropathy, oxidative stress, or inflammation.",
"title": ""
},
{
"docid": "e9cd9fccbee43dfaf7a3001220847ec6",
"text": "Cell-free protein synthesis has emerged as a powerful technology platform to help satisfy the growing demand for simple and efficient protein production. While used for decades as a foundational research tool for understanding transcription and translation, recent advances have made possible cost-effective microscale to manufacturing scale synthesis of complex proteins. Protein yields exceed grams protein produced per liter reaction volume, batch reactions last for multiple hours, costs have been reduced orders of magnitude, and reaction scale has reached the 100-liter milestone. These advances have inspired new applications in the synthesis of protein libraries for functional genomics and structural biology, the production of personalized medicines, and the expression of virus-like particles, among others. In the coming years, cell-free protein synthesis promises new industrial processes where short protein production timelines are crucial as well as innovative approaches to a wide range of applications.",
"title": ""
},
{
"docid": "7dba7b28582845bf13d9f9373e39a2af",
"text": "The Internet and social media provide a major source of information about people's opinions. Due to the rapidly growing number of online documents, it becomes both time-consuming and hard task to obtain and analyze the desired opinionated information. Sentiment analysis is the classification of sentiments expressed in documents. To improve classification perfromance feature selection methods which help to identify the most valuable features are generally applied. In this paper, we compare the performance of four feature selection methods namely Chi-square, Information Gain, Query Expansion Ranking, and Ant Colony Optimization using Maximum Entropi Modeling classification algorithm over Turkish Twitter dataset. Therefore, the effects of feature selection methods over the performance of sentiment analysis of Turkish Twitter data are evaluated. Experimental results show that Query Expansion Ranking and Ant Colony Optimization methods outperform other traditional feature selection methods for sentiment analysis.",
"title": ""
},
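One of the compared pipelines above can be sketched as chi-square selection feeding a Maximum Entropy classifier; multinomial logistic regression is used here as the usual equivalent of MaxEnt, and the value of k and the vectorizer settings are assumptions rather than the paper's configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

def build_chi2_pipeline(k=1000):
    """Chi-square feature selection in front of a MaxEnt-style classifier."""
    return make_pipeline(
        CountVectorizer(),
        SelectKBest(chi2, k=k),
        LogisticRegression(max_iter=1000),
    )

# pipe = build_chi2_pipeline()
# pipe.fit(train_docs, train_labels)      # hypothetical preprocessed Turkish tweets
```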
{
"docid": "07631274713ad80653552767d2fe461c",
"text": "Life cycle assessment (LCA) methodology was used to determine the optimum municipal solid waste (MSW) management strategy for Eskisehir city. Eskisehir is one of the developing cities of Turkey where a total of approximately 750tons/day of waste is generated. An effective MSW management system is needed in this city since the generated MSW is dumped in an unregulated dumping site that has no liner, no biogas capture, etc. Therefore, five different scenarios were developed as alternatives to the current waste management system. Collection and transportation of waste, a material recovery facility (MRF), recycling, composting, incineration and landfilling processes were considered in these scenarios. SimaPro7 libraries were used to obtain background data for the life cycle inventory. One ton of municipal solid waste of Eskisehir was selected as the functional unit. The alternative scenarios were compared through the CML 2000 method and these comparisons were carried out from the abiotic depletion, global warming, human toxicity, acidification, eutrophication and photochemical ozone depletion points of view. According to the comparisons and sensitivity analysis, composting scenario, S3, is the more environmentally preferable alternative. In this study waste management alternatives were investigated only on an environmental point of view. For that reason, it might be supported with other decision-making tools that consider the economic and social effects of solid waste management.",
"title": ""
},
{
"docid": "0f1a36a4551dc9c6b4ae127c34ff7330",
"text": "Internet of Things (IoT) is reshaping our daily lives by bridging the gaps between physical and digital world. To enable ubiquitous sensing, seamless connection and real-time processing for IoT applications, fog computing is considered as a key component in a heterogeneous IoT architecture, which deploys storage and computing resources to network edges. However, the fog-based IoT architecture can lead to various security and privacy risks, such as compromised fog nodes that may impede developments of IoT by attacking the data collection and gathering period. In this paper, we propose a novel privacy-preserving and reliable scheme for the fog-based IoT to address the data privacy and reliability challenges of the selective data aggregation service. Specifically, homomorphic proxy re-encryption and proxy re-authenticator techniques are respectively utilized to deal with the data privacy and reliability issues of the service, which supports data aggregation over selective data types for any type-driven applications. We define a new threat model to formalize the non-collusive and collusive attacks of compromised fog nodes, and it is demonstrated that the proposed scheme can prevent both non-collusive and collusive attacks in our model. In addition, performance evaluations show the efficiency of the scheme in terms of computational costs and communication overheads.",
"title": ""
},
{
"docid": "2c734e48d2698ea11c84efa4704d5da8",
"text": "Nowadays there is an increasing interest in mobile application development. However, developers often disregard, or at least significantly adapt, existing software development processes to suit their purpose, given the existing specific constraints. Such adjustments can introduce variations and new trends in existing processes that in many occasions are not shared with the scientific community since there is no official documentation, thus justifying further research. In this paper, we present a study and characterization of current mobile application development processes based on a practical experience. We consider a set of real case studies to investigate the current development processes for mobile applications used by software development companies, as well as by independent developers. The result of the present study is the identification of mobile software development processes, namely agile approaches, and also of shortcomings in current methodologies applied in industry and academy, namely the lack of informed and experienced resources to develop mobile apps.",
"title": ""
},
{
"docid": "63a14ae93563bc66d9880c4c04c0c686",
"text": "This brief analyzes the jitter as well as the power dissipation of phase-locked loops (PLLs). It aims at defining a benchmark figure-of-merit (FOM) that is compatible with the well-known FOM for oscillators but now extended to an entire PLL. The phase noise that is generated by the thermal noise in the oscillator and loop components is calculated. The power dissipation is estimated, focusing on the required dynamic power. The absolute PLL output jitter is calculated, and the optimum PLL bandwidth that gives minimum jitter is derived. It is shown that, with a steep enough input reference clock, this minimum jitter is independent of the reference frequency and output frequency for a given PLL power budget. Based on these insights, a benchmark FOM for PLL designs is proposed.",
"title": ""
},
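The benchmark figure of merit discussed above is conventionally expressed in terms of the absolute rms jitter and the power dissipation; the expression below is the widely used jitter-power FOM that this line of work extends from oscillators to whole PLLs, quoted here as background rather than taken from the passage itself.

```latex
% sigma_t: absolute rms output jitter, P: PLL power dissipation
\mathrm{FOM} = 10 \log_{10}\!\left[ \left( \frac{\sigma_{t}}{1\,\mathrm{s}} \right)^{2}
\cdot \frac{P}{1\,\mathrm{mW}} \right]
```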
{
"docid": "8626803a7fd8a2190f4d6c4b56b04489",
"text": "Quotes, or quotations, are well known phrases or sentences that we use for various purposes such as emphasis, elaboration, and humor. In this paper, we introduce a task of recommending quotes which are suitable for given dialogue context and we present a deep learning recommender system which combines recurrent neural network and convolutional neural network in order to learn semantic representation of each utterance and construct a sequence model for the dialog thread. We collected a large set of twitter dialogues with quote occurrences in order to evaluate proposed recommender system. Experimental results show that our approach outperforms not only the other state-of-the-art algorithms in quote recommendation task, but also other neural network based methods built for similar tasks.",
"title": ""
},
{
"docid": "2af0ef7c117ace38f44a52379c639e78",
"text": "Examination of a child with genital or anal disease may give rise to suspicion of sexual abuse. Dermatologic, traumatic, infectious, and congenital disorders may be confused with sexual abuse. Seven children referred to us are representative of such confusion.",
"title": ""
},
{
"docid": "986820785faa0927a54f26d097adbe1b",
"text": "— Following the ideas of Bontekoe et al. who noticed that the classical Maximum Entropy Method (MEM) had difficulties to efficiently restore high and low spatial frequency structure in an image at the same time, we use the wavelet transform, a mathematical tool to decompose a signal into different frequency bands. We introduce the concept of multi-scale entropy of an image, leading to a better restoration at all spatial frequencies. This deconvolution method is flux conservative and the use of a multiresolution support solves the problem of MEM to choose the α parameter, i.e. the relative weight between the goodness-of-fit and the entropy. We show that our algorithm is efficient too for filtering astronomical images. A range of practical examples illustrate this approach.",
"title": ""
},
{
"docid": "dd170ec01ee5b969605dace70e283664",
"text": "This work discusses the regulation of the ball and plate system, the problemis to design a control laws which generates a voltage u for the servomotors to move the ball from the actual position to a desired one. The controllers are constructed by introducing nonlinear compensation terms into the traditional PD controller. In this paper, a complete physical system and controller design is explored from conception to modeling to testing and implementation. The stability of the control is presented. Experiment results are obtained via our prototype of the ball and plate system.",
"title": ""
},
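The controller family described above, a PD law augmented with a nonlinear compensation term, can be sketched per plate axis as below; the gains and the form of the compensation are placeholders, not the paper's design.

```python
def pd_with_compensation(error, d_error, compensation, kp=4.0, kd=1.5):
    # Control voltage for one plate axis: classical PD action plus a nonlinear
    # compensation term (e.g. an estimate of gravity or coupling effects).
    return kp * error + kd * d_error + compensation

# Hypothetical closed loop for the x axis:
# u_x = pd_with_compensation(x_ref - x, -x_dot, comp_x(x, x_dot, plate_angle))
```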
{
"docid": "3cc9d3767cbfac13fcb7d363419eccad",
"text": "SpeechPy is an open source Python package that contains speech preprocessing techniques, speech features, and important post-processing operations. It provides most frequent used speech features including MFCCs and filterbank energies alongside with the log-energy of filter-banks. The aim of the package is to provide researchers with a simple tool for speech feature extraction and processing purposes in applications such as Automatic Speech Recognition and Speaker Verification.",
"title": ""
},
{
"docid": "669de02f4c87c2a67e776410f70bf801",
"text": "Repeating an item in a list benefits recall performance, and this benefit increases when the repetitions are spaced apart (Madigan, 1969; Melton, 1970). Retrieved context theory incorporates 2 mechanisms that account for these effects: contextual variability and study-phase retrieval. Specifically, if an item presented at position i is repeated at position j, this leads to retrieval of its context from its initial presentation at i (study-phase retrieval), and this retrieved context will be used to update the current state of context (contextual variability). Here we consider predictions of a computational model that embodies retrieved context theory, the context maintenance and retrieval model (CMR; Polyn, Norman, & Kahana, 2009). CMR makes the novel prediction that subjects are more likely to successively recall items that follow a shared repeated item (e.g., i + 1, j + 1) because both items are associated with the context of the repeated item presented at i and j. CMR also predicts that the probability of recalling at least 1 of 2 studied items should increase with the items' spacing (Lohnas, Polyn, & Kahana, 2011). We tested these predictions in a new experiment, and CMR's predictions were upheld. These findings suggest that retrieved context theory offers an integrated explanation for repetition and spacing effects in free recall tasks.",
"title": ""
}
] |
scidocsrr
|
36f35efe5ea091b94760347f71530d0b
|
Self-Regulation in Academic Writing Tasks
|
[
{
"docid": "2d905398cfb131e0ea674c564552b090",
"text": "In this article, I review the diverse ways in which perceived self-efficacy contributes to cognitive development and functioning. Perceived self-efficacy exerts its influence through four major processes. They include cognitive, motivational, affective, and selection processes. There are three different levels at which perceived self-efficacy operates as an important contributor to academic devellopment. Students' beliefs in their efficacy to regulate their own learning and to master academic activities determine their aspirations, level of motivation, and academic accomplishments. Teachers' beliefs in their personal efficacy to motivate and promote learning affect the types of learning environments tlhey create and the level of academic progress their students achieve. Faculti~es' beliefs in their collective instructional efficacy contribute significantly to their schools' level of academic achievement. Student body characteristics influence school-level achievement more strongly by altering faculties' beliefs in their collective efficacy than through direct affects on school achievement.",
"title": ""
}
] |
[
{
"docid": "176cf87aa657a5066a02bfb650532070",
"text": "Structural Design of Reinforced Concrete Tall Buildings Author: Ali Sherif S. Rizk, Director, Dar al-Handasah Shair & Partners Subject: Structural Engineering",
"title": ""
},
{
"docid": "3e0a52bc1fdf84279dee74898fcd93bf",
"text": "A variety of abnormal imaging findings of the petrous apex are encountered in children. Many petrous apex lesions are identified incidentally while images of the brain or head and neck are being obtained for indications unrelated to the temporal bone. Differential considerations of petrous apex lesions in children include “leave me alone” lesions, infectious or inflammatory lesions, fibro-osseous lesions, neoplasms and neoplasm-like lesions, as well as a few rare miscellaneous conditions. Some lesions are similar to those encountered in adults, and some are unique to children. Langerhans cell histiocytosis (LCH) and primary and metastatic pediatric malignancies such as neuroblastoma, rhabomyosarcoma and Ewing sarcoma are more likely to be encountered in children. Lesions such as petrous apex cholesterol granuloma, cholesteatoma and chondrosarcoma are more common in adults and are rarely a diagnostic consideration in children. We present a comprehensive pictorial review of CT and MRI appearances of pediatric petrous apex lesions.",
"title": ""
},
{
"docid": "9e4adad2e248895d80f28cf6134f68c1",
"text": "Maltodextrin (MX) is an ingredient in high demand in the food industry, mainly for its useful physical properties which depend on the dextrose equivalent (DE). The DE has however been shown to be an inaccurate parameter for predicting the performance of the MXs in technological applications, hence commercial MXs were characterized by mass spectrometry (MS) to determine their molecular weight distribution (MWD) and degree of polymerization (DP). Samples were subjected to different water activities (aw). Water adsorption was similar at low aw, but radically increased with the DP at higher aw. The decomposition temperature (Td) showed some variations attributed to the thermal hydrolysis induced by the large amount of adsorbed water and the supplied heat. The glass transition temperature (Tg) linearly decreased with both, aw and DP. The microstructural analysis by X-ray diffraction showed that MXs did not crystallize with the adsorption of water, preserving their amorphous structure. The optical micrographs showed radical changes in the overall appearance of the MXs, indicating a transition from a glassy to a rubbery state. Based on these characterizations, different technological applications for the MXs were suggested.",
"title": ""
},
{
"docid": "c613138270b05f909904519d195fcecf",
"text": "This study deals with artificial neural network (ANN) modeling a diesel engine using waste cooking biodiesel fuel to predict the brake power, torque, specific fuel consumption and exhaust emissions of engine. To acquire data for training and testing the proposed ANN, two cylinders, four-stroke diesel engine was fuelled with waste vegetable cooking biodiesel and diesel fuel blends and operated at different engine speeds. The properties of biodiesel produced from waste vegetable oil was measured based on ASTM standards. The experimental results reveal that blends of waste vegetable oil methyl ester with diesel fuel provide better engine performance and improved emission characteristics. Using some of the experimental data for training, an ANN model based on standard Back-Propagation algorithm for the engine was developed. Multi layer perception network (MLP) was used for nonlinear mapping between the input and the output parameters. Different activation functions and several rules were used to assess the percentage error between the desired and the predicted values. It was observed that the ANN model can predict the engine performance and exhaust emissions quite well with correlation coefficient (R) were 0.9487, 0.999, 0.929 and 0.999 for the engine torque, SFC, CO and HC emissions, respectively. The prediction MSE (Mean Square Error) error was between the desired outputs as measured values and the simulated values by the model was obtained as 0.0004.",
"title": ""
},
{
"docid": "a7284bfc38d5925cb62f04c8f6dcaae2",
"text": "The brain's electrical signals enable people without muscle control to physically interact with the world.",
"title": ""
},
{
"docid": "5b9d8b0786691f68659bcce2e6803cdb",
"text": "We introduce SentEval, a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. The aim is to provide a fairer, less cumbersome and more centralized way for evaluating sentence representations.",
"title": ""
},
{
"docid": "d752bf764e4518cee561b11146d951c4",
"text": "Speech recognition is an increasingly important input modality, especially for mobile computing. Because errors are unavoidable in real applications, efficient correction methods can greatly enhance the user experience. In this paper we study a reranking and classification strategy for choosing word alternates to display to the user in the framework of a tap-to-correct interface. By employing a logistic regression model to estimate the probability that an alternate will offer a useful correction to the user, we can significantly reduce the average length of the alternates lists generated with no reduction in the number of words they are able to correct.",
"title": ""
},
{
"docid": "fb116c7cd3ab8bd88fb7817284980d4a",
"text": "Sentence-level sentiment classification is important to understand users' fine-grained opinions. Existing methods for sentence-level sentiment classification are mainly based on supervised learning. However, it is difficult to obtain sentiment labels of sentences since manual annotation is expensive and time-consuming. In this paper, we propose an approach for sentence-level sentiment classification without the need of sentence labels. More specifically, we propose a unified framework to incorporate two types of weak supervision, i.e., document-level and word-level sentiment labels, to learn the sentence-level sentiment classifier. In addition, the contextual information of sentences and words extracted from unlabeled sentences is incorporated into our approach to enhance the learning of sentiment classifier. Experiments on benchmark datasets show that our approach can effectively improve the performance of sentence-level sentiment classification.",
"title": ""
},
{
"docid": "0d644ca204280bf3f7bf4ea5e4cb8886",
"text": "Accurate rainfall forecasting is critical because it has a great impact on people’s social and economic activities. Recent trends on various literatures shows that Deep Learning (Neural Network) is a promising methodology to tackle many challenging tasks. In this study, we introduce a brand-new data-driven precipitation prediction model called DeepRain. This model predicts the amount of rainfall from weather radar data, which is three-dimensional and four-channel data, using convolutional LSTM (ConvLSTM). ConvLSTM is a variant of LSTM (Long Short-Term Memory) containing a convolution operation inside the LSTM cell. For the experiment, we used radar reflectivity data for a twoyear period whose input is in a time series format in units of 6 min divided into 15 records. The output is the predicted rainfall information for the input data. Experimental results show that two-stacked ConvLSTM reduced RMSE by 23.0% compared to linear regression.",
"title": ""
},
{
"docid": "d97a3b15b3a269d697d9936c1c192781",
"text": "In this paper, we take a queer linguistics approach to the analysis of data from British newspaper articles that discuss the introduction of same-sex marriage. Drawing on methods from CDA and corpus linguistics, we focus on the construction of agency in relation to the government extending marriage to same-sex couples, and those resisting this. We show that opponents to same-sex marriage are represented and represent themselves as victims whose moral values, traditions, and civil liberties are being threatened by the state. Specifically, we argue that victimhood is invoked in a way that both enables and permits discourses of implicit homophobia.",
"title": ""
},
{
"docid": "a552f0ee9fafe273859a11f29cf7670d",
"text": "A majority of the existing stereo matching algorithms assume that the corresponding color values are similar to each other. However, it is not so in practice as image color values are often affected by various radiometric factors such as illumination direction, illuminant color, and imaging device changes. For this reason, the raw color recorded by a camera should not be relied on completely, and the assumption of color consistency does not hold good between stereo images in real scenes. Therefore, the performance of most conventional stereo matching algorithms can be severely degraded under the radiometric variations. In this paper, we present a new stereo matching measure that is insensitive to radiometric variations between left and right images. Unlike most stereo matching measures, we use the color formation model explicitly in our framework and propose a new measure, called the Adaptive Normalized Cross-Correlation (ANCC), for a robust and accurate correspondence measure. The advantage of our method is that it is robust to lighting geometry, illuminant color, and camera parameter changes between left and right images, and does not suffer from the fattening effect unlike conventional Normalized Cross-Correlation (NCC). Experimental results show that our method outperforms other state-of-the-art stereo methods under severely different radiometric conditions between stereo images.",
"title": ""
},
{
"docid": "f95df0d732e36c4db578d0b85a722615",
"text": "Computer assisted language learning (CAPT) has been shown to be effective for learning non-natives pronunciation details of a new language. No automatic pronunciation evaluation system exists for non-native Norwegian. We present initial experiments on the Norwegian quantity contrast between short and long vowels. A database of native and non-native speakers was recorded for training and test respectively. We have used a set of acoustic-phonetic features and combined them in a classifier based on linear discriminant analysis (LDA). The resulting classification rate was 92.3% compared with a human rating. As expected, vowel duration was the most important feature, whereas vowel spectral content contributed insignificantly. The achieved classification rate is promising with respect to making a useful Norwegian CAPT for quantity.",
"title": ""
},
{
"docid": "58640b446a3c03ab8296302498e859a5",
"text": "With Islands of Music we present a system which facilitates exploration of music libraries without requiring manual genre classification. Given pieces of music in raw audio format we estimate their perceived sound similarities based on psychoacoustic models. Subsequently, the pieces are organized on a 2-dimensional map so that similar pieces are located close to each other. A visualization using a metaphor of geographic maps provides an intuitive interface where islands resemble genres or styles of music. We demonstrate the approach using a collection of 359 pieces of music.",
"title": ""
},
{
"docid": "bf87a4c68912f1de3492dac098f4fc17",
"text": "In this paper, we demonstrate a blockchain-based solution for transparently managing and analyzing data in a pay-as-you-go car insurance application. This application allows drivers who rarely use cars to only pay insurance premium for particular trips they would like to travel. One of the key challenges from database perspective is how to ensure all the data pertaining to the actual trip and premium payment made by the users are transparently recorded so that every party in the insurance contract including the driver, the insurance company, and the financial institution is confident that the data are tamper-proof and traceable. \n Another challenge from information retrieval perspective is how to perform entity matching and pattern matching on customer data as well as their trip and claim history recorded on the blockchain for intelligent fraud detection. Last but not least, the drivers' trip history, once have been collected sufficiently, can be much valuable for the insurance company to do offline analysis and build statistics on past driving behaviour and past vehicle runtime. These statistics enable the insurance company to offer the users with transparent and individualized insurance quotes. Towards this end, we develop a blockchain-based solution for micro-insurance applications that transparently keeps records and executes smart contracts depending on runtime conditions while also connecting with off-chain analytic databases.",
"title": ""
},
{
"docid": "73caebe78a81e6debe7dfcfb609354d2",
"text": "This paper presents an original device for the electrification of granular mixtures in vibrated zigzag-shaped pipes. Spatial movement of the granules introduced in the pipes is controlled by varying the oscillation amplitude and the frequency of a slider-crank mechanism. In the first set of experiments, two sorts of granular plastics (ABS and HIPS) were separately processed through the vibratory tribocharging device. Both ABS and HIPS charged negatively in contact with the aluminum pipes. The absolute value of charge/mass ratio increased with the amplitude and frequency of the vibratory movements to attain a maximum of 26 nC/g for the HIPS particles. In the second set of experiments, 100-g samples of 50% ABS + 50% HIPS were tribocharged, than introduced in a freefall electrostatic separator. A composite experimental design was performed for modeling the process. The output variable was the extraction of ABS while the speed and length of the crank were with the applied voltage the three control variables under investigation. ABS extractions higher than 85% were obtained for optimally chosen values of the control variables.",
"title": ""
},
{
"docid": "b80df19e67d2bbaabf4da18d7b5af4e2",
"text": "This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step to learn about how to select and compose facial components from the databases; a runtime synthesis step to generate the cartoon face by assembling parts from a database of stylized facial components. We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components extracted from a set of cartoon faces to maintain a natural, consistent, and attractive look of the results. We demonstrate generality and robustness of our approach by applying it to a variety of portrait images and compare our output with stylized results created by artists via a comprehensive user study.",
"title": ""
},
{
"docid": "4c27cde9b9170fc77d43d2cefdd4736d",
"text": "Many events occur in the world. Some event types are stochastically excited or inhibited—in the sense of having their probabilities elevated or decreased—by patterns in the sequence of previous events. Discovering such patterns can help us predict which type of event will happen next and when. We model streams of discrete events in continuous time, by constructing a neurally self-modulating multivariate point process in which the intensities of multiple event types evolve according to a novel continuous-time LSTM. This generative model allows past events to influence the future in complex and realistic ways, by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events. Our model has desirable qualitative properties. It achieves competitive likelihood and predictive accuracy on real and synthetic datasets, including under missing-data conditions.",
"title": ""
},
{
"docid": "3bab09c8759c0b7040c48003c7a745bc",
"text": "We describe an approach to coreference resolution that relies on the intuition that easy decisions should be made early, while harder decisions should be left for later when more information is available. We are inspired by the recent success of the rule-based system of Raghunathan et al. (2010), which relies on the same intuition. Our system, however, automatically learns from training data what constitutes an easy decision. Thus, we can utilize more features, learn more precise weights, and adapt to any dataset for which training data is available. Experiments show that our system outperforms recent state-of-the-art coreference systems including Raghunathan et al.’s system as well as a competitive baseline that uses a pairwise classifier.",
"title": ""
},
{
"docid": "610922e925ccb52308dcc68ca2e7bc6b",
"text": "In this brief, we introduce an architecture for accelerating convolution stages in convolutional neural networks (CNNs) implemented in embedded vision systems. The purpose of the architecture is to exploit the inherent parallelism in CNNs to reduce the required bandwidth, resource usage, and power consumption of highly computationally complex convolution operations as required by real-time embedded applications. We also implement the proposed architecture using fixed-point arithmetic on a ZC706 evaluation board that features a Xilinx Zynq-7000 system on-chip, where the embedded ARM processor with high clocking speed is used as the main controller to increase the flexibility and speed. The proposed architecture runs under a frequency of 150 MHz, which leads to 19.2 Giga multiply accumulation operations per second while consuming less than 10 W in power. This is done using only 391 DSP48 modules, which shows significant utilization improvement compared to the state-of-the-art architectures.",
"title": ""
},
{
"docid": "f333e058a8025c21808a70e98c4863a9",
"text": "Artifact-centric process mining is an extension of classical process mining (van der Aalst 2016) that allows to analyze event data with more than one case identifier in its entirety. It allows to analyze the dynamic behavior of (business) processes that create, read, update, and delete multiple data objects that are related to each other in relationships with one-to-one, one-to-many, and many-to-many cardinalities. Such event data is typically stored in relational databases of, for example, Enterprise Resource Planning (ERP) systems (Lu et al 2015). Artifact-centric process mining comprises artifact-centric process discovery, conformance checking, and enhancement. The outcomes of artifact-centric process mining can be used for documentation of the actual data flow in an organization, and for analyzing deviations in the data flow for performance and conformance analysis. The input to artifact-centric process discovery is either an event log where events carry information about the data objects and their changes, or a relational database also containing records about data creation, change, and deletion events. The output of artifact-centric process discovery are a data model of the objects (each defining its own case identifier) and relations between objects, and an artifact-centric pro-",
"title": ""
}
] |
scidocsrr
|
4d6e05afcf60f8348b92ec5f326e51da
|
A Mechanism for Turing Pattern Formation with Active and Passive Transport
|
[
{
"docid": "f4db5b7cc70661ff780c96cd58f6624e",
"text": "Error Thresholds and Their Relation to Optimal Mutation Rates p. 54 Are Artificial Mutation Biases Unnatural? p. 64 Evolving Mutation Rates for the Self-Optimisation of Genetic Algorithms p. 74 Statistical Reasoning Strategies in the Pursuit and Evasion Domain p. 79 An Evolutionary Method Using Crossover in a Food Chain Simulation p. 89 On Self-Reproduction and Evolvability p. 94 Some Techniques for the Measurement of Complexity in Tierra p. 104 A Genetic Neutral Model for Quantitative Comparison of Genotypic Evolutionary Activity p. 109",
"title": ""
}
] |
[
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "b94d33cc0366703b48d75ad844422c85",
"text": "We propose a dataflow architecture, called HyperFlow, that offers a supporting infrastructure that creates an abstraction layer over computation resources and naturally exposes heterogeneous computation to dataflow processing. In order to show the efficiency of our system as well as testing it, we have included a set of synthetic and real-case applications. First, we designed a general suite of micro-benchmarks that captures main parallel pipeline structures and allows evaluation of HyperFlow under different stress conditions. Finally, we demonstrate the potential of our system with relevant applications in visualization. Implementations in HyperFlow are shown to have greater performance than actual hand-tuning codes, yet still providing high scalability on different platforms.",
"title": ""
},
{
"docid": "2dd42cce112c61950b96754bb7b4df10",
"text": "Hierarchical methods have been widely explored for object recognition, which is a critical component of scene understanding. However, few existing works are able to model the contextual information (e.g., objects co-occurrence) explicitly within a single coherent framework for scene understanding. Towards this goal, in this paper we propose a novel three-level (superpixel level, object level and scene level) hierarchical model to address the scene categorization problem. Our proposed model is a coherent probabilistic graphical model that captures the object co-occurrence information for scene understanding with a probabilistic chain structure. The efficacy of the proposed model is demonstrated by conducting experiments on the LabelMe dataset.",
"title": ""
},
{
"docid": "cde1419d6b4912b414a3c83139dc3f06",
"text": "This book results from a decade of presenting the user-centered design (UCD) methodology for hundreds of companies (p. xxiii) and appears to be the book complement to the professional development short course. Its purpose is to encourage software developers to focus on the total user experience of software products during the whole of the development cycle. The notion of the “total user experience” is valuable because it focuses attention on the whole product-use cycle, from initial awareness through productive use.",
"title": ""
},
{
"docid": "72782fdcc61d1059bce95fe4e7872f5b",
"text": "ÐIn object prototype learning and similar tasks, median computation is an important technique for capturing the essential information of a given set of patterns. In this paper, we extend the median concept to the domain of graphs. In terms of graph distance, we introduce the novel concepts of set median and generalized median of a set of graphs. We study properties of both types of median graphs. For the more complex task of computing generalized median graphs, a genetic search algorithm is developed. Experiments conducted on randomly generated graphs demonstrate the advantage of generalized median graphs compared to set median graphs and the ability of our genetic algorithm to find approximate generalized median graphs in reasonable time. Application examples with both synthetic and nonsynthetic data are shown to illustrate the practical usefulness of the concept of median graphs. Index TermsÐMedian graph, graph distance, graph matching, genetic algorithm,",
"title": ""
},
{
"docid": "9b504f633488016fad865dee6fbdf3ef",
"text": "Transmission lines is the important factor of the power system. Transmission and distribution lines has good contribution in the generating unit and consumers to obtain the continuity of electric supply. To economically transfer high power between systems and from control generating field. Transmission line run over hundreds of kilometers to supply electrical power to the consumers. It is a required for industries to detect the faults in the power system as early as possible. “Fault Detection and Auto Line Distribution System With GSM Module” is a automation technique used for fault detection in AC supply and auto sharing of power. The significance undetectable faults is that they represent a serious public safety hazard as well as a risk of arcing ignition of fires. This paper represents under voltage and over current fault detection. It is useful in technology to provide many applications like home, industry etc..",
"title": ""
},
{
"docid": "641d09ff15b731b679dbe3e9004c1578",
"text": "In recent years, geological disposal of radioactive waste has focused on placement of highand intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. Potentially it could accommodate most of the world’s spent fuel inventory. This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.",
"title": ""
},
{
"docid": "de7eb0735d6cd2fb13a00251d89b0fbc",
"text": "Classical conditioning, the simplest form of associative learning, is one of the most studied paradigms in behavioural psychology. Since the formal description of classical conditioning by Pavlov, lesion studies in animals have identified a number of anatomical structures involved in, and necessary for, classical conditioning. In the 1980s, with the advent of functional brain imaging techniques, particularly positron emission tomography (PET), it has been possible to study the functional anatomy of classical conditioning in humans. The development of functional magnetic resonance imaging (fMRI)--in particular single-trial or event-related fMRI--has now considerably advanced the potential of neuroimaging for the study of this form of learning. Recent event-related fMRI and PET studies are adding crucial data to the current discussion about the putative role of the amygdala in classical fear conditioning in humans.",
"title": ""
},
{
"docid": "d66f86ac2b42d13ba2199e41c85d3c93",
"text": "We introduce Fluid Annotation, an intuitive human-machine collaboration interface for annotating the class label and outline of every object and background region in an image. Fluid annotation is based on three principles:(I) Strong Machine-Learning aid. We start from the output of a strong neural network model, which the annotator can edit by correcting the labels of existing regions, adding new regions to cover missing objects, and removing incorrect regions.The edit operations are also assisted by the model.(II) Full image annotation in a single pass. As opposed to performing a series of small annotation tasks in isolation [51,68], we propose a unified interface for full image annotation in a single pass.(III) Empower the annotator.We empower the annotator to choose what to annotate and in which order. This enables concentrating on what the ma-chine does not already know, i.e. putting human effort only on the errors it made. This helps using the annotation budget effectively.\n Through extensive experiments on the COCO+Stuff dataset [11,51], we demonstrate that Fluid Annotation leads to accurate an-notations very efficiently, taking 3x less annotation time than the popular LabelMe interface [70].",
"title": ""
},
{
"docid": "de8661c2e63188464de6b345bfe3a908",
"text": "Modern computer games show potential not just for engaging and entertaining users, but also in promoting learning. Game designers employ a range of techniques to promote long-term user engagement and motivation. These techniques are increasingly being employed in so-called serious games, games that have nonentertainment purposes such as education or training. Although such games share the goal of AIED of promoting deep learner engagement with subject matter, the techniques employed are very different. Can AIED technologies complement and enhance serious game design techniques, or does good serious game design render AIED techniques superfluous? This paper explores these questions in the context of the Tactical Language Training System (TLTS), a program that supports rapid acquisition of foreign language and cultural skills. The TLTS combines game design principles and game development tools with learner modelling, pedagogical agents, and pedagogical dramas. Learners carry out missions in a simulated game world, interacting with non-player characters. A virtual aide assists the learners if they run into difficulties, and gives performance feedback in the context of preparatory exercises. Artificial intelligence plays a key role in controlling the behaviour of the non-player characters in the game; intelligent tutoring provides supplementary scaffolding.",
"title": ""
},
{
"docid": "91771b6c50d7193e5612d9552913dec8",
"text": "The expected diffusion of EVehicles (EVs) to limit the impact of fossil fuel on mobility is going to cause severe issues to the management of electric grid. A large number of charging stations is going to be installed on the power grid to support EVs. Each of the charging station could require more than 100 kW from the grid. The grid consumption is unpredictable and it depends from the need of EVs in the neighborhood. The impact of the EV on the power grid can be limited by the proper exploitation of Vehicle to Grid communication (V2G). The advent of Low Power Wide Area Network (LPWAN) promoted by Internet Of Things applications offers new opportunity for wireless communications. In this work, an example of such a technology (the LoRaWAN solution) is tested in a real-world scenario as a candidate for EV to grid communications. The experimental results highlight as LoRaWAN technology can be used to cover an area with a radius under 2 km, in an urban environment. At this distance, the Received Signal Strength Indicator (RSSI) is about −117 dBm. Such a result demonstrates the feasibility of the proposed approach.",
"title": ""
},
{
"docid": "1e1d3d7a4997f6f58b7ed3f6b4ecb054",
"text": "Image semantic segmentation is the task of partitioning image into several regions based on semantic concepts. In this paper, we learn a weakly supervised semantic segmentation model from social images whose labels are not pixel-level but image-level; furthermore, these labels might be noisy. We present a joint conditional random field model leveraging various contexts to address this issue. More specifically, we extract global and local features in multiple scales by convolutional neural network and topic model. Inter-label correlations are captured by visual contextual cues and label co-occurrence statistics. The label consistency between image-level and pixel-level is finally achieved by iterative refinement. Experimental results on two real-world image datasets PASCAL VOC2007 and SIFT-Flow demonstrate that the proposed approach outperforms state-of-the-art weakly supervised methods and even achieves accuracy comparable with fully supervised methods.",
"title": ""
},
{
"docid": "9a82781af933251208aef5e683839346",
"text": "We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems. We discuss these systems’ mode-of-operation, functional behavior and include models of their expected performance, shortcomings, and limitations. We provide information about the systems’ optical characteristics, their correlation algorithms, and how these properties can affect different applications, including 3D reconstruction and gesture recognition. Our discussion covers the Intel RealSense R200 and the Intel RealSense D400 (formally RS400).",
"title": ""
},
{
"docid": "3c28d7571e8d863b84ccf4edfc812dc6",
"text": "The purpose of this project was to explore what attitudes physicians, nurses, and operating room technicians had about working with Certified Registered Nurse Anesthetists (CRNAs) to better understand practice barriers and facilitators. This Q methodology study used a purposive sample of operating room personnel from four institutions in the Midwestern United States. Participants completed a -4 to +4 rank-ordering of their level of agreement with 34 attitude statements representing a wide range of beliefs about nurse anesthetists. Centroid factor analysis with varimax rotation was used to analyze 24 returned Q sorts. Three distinct viewpoints emerged that explained 66% of the variance: favoring unrestricted practice, favoring anesthesiologist supervision, and favoring anesthesiologist practice. Research is needed on how to develop workplace attitudes that support autonomous nurse anesthetist practice and to understand preferences for restricted practice in team members other than physicians.",
"title": ""
},
{
"docid": "57256bce5741b23fa4827fad2ad9e321",
"text": "This study assessed the depth of online learning, with a focus on the nature of online interaction in four distance education course designs. The Study Process Questionnaire was used to measure the shift in students’ approach to learning from the beginning to the end of the courses. Design had a significant impact on the nature of the interaction and whether students approached learning in a deep and meaningful manner. Structure and leadership were found to be crucial for online learners to take a deep and meaningful approach to learning.",
"title": ""
},
{
"docid": "c87a1cea06d135628691a912cad582c1",
"text": "OBJECTIVE\nDelphi technique is a structured process commonly used to developed healthcare quality indicators, but there is a little recommendation for researchers who wish to use it. This study aimed 1) to describe reporting of the Delphi method to develop quality indicators, 2) to discuss specific methodological skills for quality indicators selection 3) to give guidance about this practice.\n\n\nMETHODOLOGY AND MAIN FINDING\nThree electronic data bases were searched over a 30 years period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (9% for year's number of experience of the experts to 98% for the type of Delphi used). Reporting of methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus and 57% (48/80) listed quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) with a physical meeting of the panel members, usually between Delphi rounds. Median number of panel members was 17(Q1:11; Q3:31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria to select quality indicators, 28 (37%) used validity and 17(23%) feasibility.\n\n\nCONCLUSION\nThe use and reporting of the Delphi method for quality indicators selection need to be improved. We provide some guidance to the investigators to improve the using and reporting of the method in future surveys.",
"title": ""
},
{
"docid": "9c77080dbab62dc7a5ddafcde98d094c",
"text": "A cornucopia of dimensionality reduction techniques have emerged over the past decade, leaving data analysts with a wide variety of choices for reducing their data. Means of evaluating and comparing low-dimensional embeddings useful for visualization, however, are very limited. When proposing a new technique it is common to simply show rival embeddings side-by-side and let human judgment determine which embedding is superior. This study investigates whether such human embedding evaluations are reliable, i.e., whether humans tend to agree on the quality of an embedding. We also investigate what types of embedding structures humans appreciate a priori. Our results reveal that, although experts are reasonably consistent in their evaluation of embeddings, novices generally disagree on the quality of an embedding. We discuss the impact of this result on the way dimensionality reduction researchers should present their results, and on applicability of dimensionality reduction outside of machine learning.",
"title": ""
},
{
"docid": "c66fc0dbd8774fdb5fea3990985e65d7",
"text": "Since 1985 various evolutionary approaches to multiobjective optimization have been developed, capable of searching for multiple solutions concurrently in a single run. But the few comparative studies of different methods available to date are mostly qualitative and restricted to two approaches. In this paper an extensive, quantitative comparison is presented, applying four multiobjective evolutionary algorithms to an extended ~0/1 knapsack problem. 1 I n t r o d u c t i o n Many real-world problems involve simultaneous optimization of several incommensurable and often competing objectives. Usually, there is no single optimal solution, but rather a set of alternative solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered. They are known as Pareto-optimal solutions. Mathematically, the concept of Pareto-optimality can be defined as follows: Let us consider, without loss of generality, a multiobjective maximization problem with m parameters (decision variables) and n objectives: Maximize y = f (x ) = ( f l (x ) , f 2 ( x ) , . . . , f,~(x)) (1) where x = ( x l , x 2 , . . . , x m ) e X and y = ( y l , y 2 , . . . , y ~ ) E Y are tuple. A decision vector a E X is said to dominate a decision vector b E X (also written as a >-b) iff V i e { 1 , 2 , . . . , n } : l ~ ( a ) > _ f ~ ( b ) A ~ j e { 1 , 2 , . . . , n } : f j ( a ) > f j ( b ) (2) Additionally, in this study we say a covers b iff a ~b or a = b. All decision vectors which are not dominated by any other decision vector are called nondominated or Pareto-optimal. Often, there is a special interest in finding or approximating the Paretooptimal set, mainly to gain deeper insight into the problem and knowledge about alternate solutions, respectively. Evolutionary algorithms (EAs) seem to be especially suited for this task, because they process a set of solutions in parallel, eventually exploiting similarities of solutions by crossover. Some researcher suggest that multiobjective search and optimization might be a problem area where EAs do better than other blind search strategies [1][12]. Since the mid-eighties various multiob]ective EAs have been developed, capable of searching for multiple Pareto-optimal solutions concurrently in a single",
"title": ""
},
{
"docid": "39ccd0efd846c2314da557b73a326e85",
"text": "We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.",
"title": ""
},
{
"docid": "b9ca1209ce50bf527d68109dbdf7431c",
"text": "The MATLAB model of the analog multiplier based on the sigma delta modulation is developed. Different modes of multiplier are investigated and obtained results are compared with analytical results.",
"title": ""
}
] |
scidocsrr
|
ecb82d372295febe00d245bd7ee11a99
|
Empirical studies of end-user information searching
|
[
{
"docid": "516bbc36588afeeba0c3045f38efadb0",
"text": "full text) and the cognitively different indexer interpretations of the",
"title": ""
},
{
"docid": "b7b664d1749b61f2f423d7080a240a60",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
}
] |
[
{
"docid": "050679bfbeba42b30f19f1a824ec518a",
"text": "Principles of cognitive science hold the promise of helping children to study more effectively, yet they do not always make successful transitions from the laboratory to applied settings and have rarely been tested in such settings. For example, self-generation of answers to questions should help children to remember. But what if children cannot generate anything? And what if they make an error? Do these deviations from the laboratory norm of perfect generation hurt, and, if so, do they hurt enough that one should, in practice, spurn generation? Can feedback compensate, or are errors catastrophic? The studies reviewed here address three interlocking questions in an effort to better implement a computer-based study program to help children learn: (1) Does generation help? (2) Do errors hurt if they are corrected? And (3) what is the effect of feedback? The answers to these questions are: Yes, generation helps; no, surprisingly, errors that are corrected do not hurt; and, finally, feedback is beneficial in verbal learning. These answers may help put cognitive scientists in a better position to put their well-established principles in the service of children's learning.",
"title": ""
},
{
"docid": "5a583f5b67ceb7c59da2cef8201880df",
"text": "This article presents two designs of power amplifiers to be used with piezo-electric actuators in diesel injectors. The topologies as well as the controller approach and implementation are discussed.",
"title": ""
},
{
"docid": "f51bf455134a2aa80ba74e161b1de1e1",
"text": "Online reviews are often our first port of call when considering products and purchases online. When evaluating a potential purchase, we may have a specific query in mind, e.g. ‘will this baby seat fit in the overhead compartment of a 747?’ or ‘will I like this album if I liked Taylor Swift’s 1989?’. To answer such questions we must either wade through huge volumes of consumer reviews hoping to find one that is relevant, or otherwise pose our question directly to the community via a Q/A system. In this paper we hope to fuse these two paradigms: given a large volume of previously answered queries about products, we hope to automatically learn whether a review of a product is relevant to a given query. We formulate this as a machine learning problem using a mixture-of-experts-type framework—here each review is an ‘expert’ that gets to vote on the response to a particular query; simultaneously we learn a relevance function such that ‘relevant’ reviews are those that vote correctly. At test time this learned relevance function allows us to surface reviews that are relevant to new queries on-demand. We evaluate our system, Moqa, on a novel corpus of 1.4 million questions (and answers) and 13 million reviews. We show quantitatively that it is effective at addressing both binary and open-ended queries, and qualitatively that it surfaces reviews that human evaluators consider to be relevant.",
"title": ""
},
{
"docid": "296602c0884ea9c330a6fc8e33a7b722",
"text": "The skin is a major exposure route for many potentially toxic chemicals. It is, therefore, important to be able to predict the permeability of compounds through skin under a variety of conditions. Available skin permeability databases are often limited in scope and not conducive to developing effective models. This sparseness and ambiguity of available data prompted the use of fuzzy set theory to model and predict skin permeability. Using a previously published database containing 140 compounds, a rule-based Takagi–Sugeno fuzzy model is shown to predict skin permeability of compounds using octanol-water partition coefficient, molecular weight, and temperature as inputs. Model performance was estimated using a cross-validation approach. In addition, 10 data points were removed prior to model development for additional testing with new data. The fuzzy model is compared to a regression model for the same inputs using both R2 and root mean square error measures. The quality of the fuzzy model is also compared with previously published models. The statistical analysis demonstrates that the fuzzy model performs better than the regression model with identical data and validation protocols. The prediction quality for this model is similar to others that were published. The fuzzy model provides insights on the relationships between lipophilicity, molecular weight, and temperature on percutaneous penetration. This model can be used as a tool for rapid determination of initial estimates of skin permeability.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "a10752bb80ad47e18ef7dbcd83d49ff7",
"text": "Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace Multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce 10.74% of power consumption and 9.8% of delay on average, with an error rate from 0.2% to 13.76% The accuracy of amplitude is higher than 99% In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of correct units is lower than 6% on average. Compared to the normal Wallace multiplier, the average latency of our proposed multiplier with EDC is 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier.",
"title": ""
},
{
"docid": "bb93778655c0bfa525d9539f8f720da6",
"text": "Small embedded integrated circuits (ICs) such as smart cards are vulnerable to the so-called side-channel attacks (SCAs). The attacker can gain information by monitoring the power consumption, execution time, electromagnetic radiation, and other information leaked by the switching behavior of digital complementary metal-oxide-semiconductor (CMOS) gates. This paper presents a digital very large scale integrated (VLSI) design flow to create secure power-analysis-attack-resistant ICs. The design flow starts from a normal design in a hardware description language such as very-high-speed integrated circuit (VHSIC) hardware description language (VHDL) or Verilog and provides a direct path to an SCA-resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. The basis for power analysis attack resistance is discussed. This paper describes how to adjust the library databases such that the regular single-ended static CMOS standard cells implement a dynamic and differential logic style and such that 20 000+ differential nets can be routed in parallel. This paper also explains how to modify the constraints and rules files for the synthesis, place, and differential route procedures. Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis. It successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in an 0.18-mum CMOS",
"title": ""
},
{
"docid": "df8ceb0f804a8dca7375286541866f5f",
"text": "We propose a new model for unsupervised document embedding. Leading existing approaches either require complex inference or use recurrent neural networks (RNN) that are difficult to parallelize. We take a different route and develop a convolutional neural network (CNN) embedding model. Our CNN architecture is fully parallelizable resulting in over 10x speedup in inference time over RNN models. Parallelizable architecture enables to train deeper models where each successive layer has increasingly larger receptive field and models longer range semantic structure within the document. We additionally propose a fully unsupervised learning algorithm to train this model based on stochastic forward prediction. Empirical results on two public benchmarks show that our approach produces comparable to state-of-the-art accuracy at a fraction of computational cost.",
"title": ""
},
{
"docid": "b3066a9cde7f63ec048b4cfbee6e46a0",
"text": "As the deployment of network-centric systems increases, network attacks are proportionally increasing in intensity as well as complexity. Attack detection techniques can be broadly classified as being signature-based, classification-based, or anomaly-based. In this paper we present a multi level intrusion detection system (ML-IDS) that uses autonomic computing to automate the control and management of ML-IDS. This automation allows ML-IDS to detect network attacks and proactively protect against them. ML-IDS inspects and analyzes network traffic using three levels of granularities (traffic flow, packet header, and payload), and employs an efficient fusion decision algorithm to improve the overall detection rate and minimize the occurrence of false alarms. We have individually evaluated each of our approaches against a wide range of network attacks, and then compared the results of these approaches with the results of the combined decision fusion algorithm.",
"title": ""
},
{
"docid": "2132b1f93ff079a1cdc82f7c70b48be1",
"text": "Lexical co-occurrence statistics are becoming widely used in the syntactic analysis of unconstrained text. However, analyses based solely on lexical relationships suffer from sparseness of data: it is sometimes necessary to use a less informed model in order to reliably estimate statistical parameters. For example, the \"lexical association\" strategy for resolving ambiguous prepositional phrase attachments [Hindle and Rooth. 1991] takes into account only the attachment site (a verb or its direct object) and the preposition, ignoring the object of the preposition. We investigated an extension of the lexical association strategy to make use of noun class information, thus permitting a disambiguation strategy to take more information into account. Although in preliminary experiments the extended strategy did not yield improved performance over lexical association alone. a qualitative analysis of the results suggests that the problem lies not in the noun class information, but rather in the multiplicity of classes available for each noun in the absence of sense disambiguation. This suggests several possible revisions of our proposal. 1. P r e f e r e n c e S t r a t e g i e s Prepositional phrase attachment is a paradigmatic case of the structural ambiguity problems faced by natural language parsing systems. Most models of grammar will not constrain the analysis of such attachments in examples like (1): the grammar simply specifies that a prepositional phrase such as on computer theft can be attached in several ways, and leaves the problem of selecting the correct choice to some other process. (1) a. Eventually, Mr. Stoll was invited to both the CIA and NSA to brief high-ranking officers on computer theft. b. Eventually, Mr. Stoll was invited to both the ClA and NSA [to brief [high-ranking officers on computer theft]]. c. Eventually, Mr. Stoll was invited to both the CIA and NSA [to brief [high-ranking ollicers] [on computer theft]]. As [Church and Patil, 1982] point out, the number of analyses given combinations of such \"all ways ambiguous\" constructions grows rapidly even for sentences of quite Marti A. Hearst Computer Science Division 465 Evans Hall University of California, Berkeley Berkeley, CA 94720 USA mar t i @ c s . b e r k e l e y . e d u reasonable length, so this other process has an important role to play. Discussions of sentence processing have focused primarily on structurally-based preference strategies such as right association and minimal attachment [Kimball, 1973; Frazier, 1979; Ford et al., 1982]; [Hobbs and Bear, 1990], while acknowledging the importance of semantics and pragmatics in attachment decisions, propose two syntactically-based attachment rules that are meant to be generalizations of those structural strategies. Others, however, have argued that syntactic considerations alone are insumcient for determining prepositional phrase attachments, suggesting instead that preference relationships among lexical items are the crucial factor. For example: [Wilks et aL, 1985] argue that the right attachment rules posited by [Frazier, 1979] are incorrect for phrases in general, and supply counterexarnples. They further argue that lexical preferences alone as suggested by [Ford et al., 1982] are too simplistic, and suggest instad the use of preference semantics. 
In the preference semantics framework, attachment relations of phrases are determined by comparing the preferences emanating from all the entities involved in the attachment, until the best mutual fit is found. Their CASSEX system represents the various meanings of the preposition in terms of (a) the preferred semantic class of the noun or verb that precedes the preposition (e.g., move, be, strike), (b) the case of the preposition (e.g., instrument, time, loc.static), and (c) the preferred semantic class of the head noun of the prepositional phrase (e.g., physob, event). The difficult part of this method is the identification of preference relationships and particularly determining the strengths of the preferences and how they should interact. (See also discussion in [Schubert, 1984].) [Dahlgren and McDowell, 1986] also suggest using preferences based on hand-built knowledge about the prepositions and their objects, specifying a simpler set of rules than those of [Wilks et al., 1985].",
"title": ""
},
{
"docid": "0eb4a0cb4a40407aea3025e0a3e1b534",
"text": "Telling the story of \"Moana\" became one of the most ambitious things we've ever done at the Walt Disney Animation Studios. We felt a huge responsibility to properly celebrate the culture and mythology of the Pacific Islands, in an epic tale involving demigods, monsters, vast ocean voyages, beautiful lush islands, and a sweeping musical visit to the village and people of Motunui. Join us as we discuss our partnership with our Pacific Islands consultants, known as our \"Oceanic Story Trust,\" the research and development we pursued, and the tremendous efforts of our team of engineers, artists and storytellers who brought the world of \"Moana\" to life.",
"title": ""
},
{
"docid": "1e4a74d8d4ae131467e12911fd6ac281",
"text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.",
"title": ""
},
{
"docid": "60306e39a7b281d35e8a492aed726d82",
"text": "The aim of this study was to assess the efficiency of four anesthetic agents, tricaine methanesulfonate (MS-222), clove oil, 7 ketamine, and tobacco extract on juvenile rainbow trout. Also, changes of blood indices were evaluated at optimum doses of four anesthetic agents. Basal effective concentrations determined were 40 mg L−1 (induction, 111 ± 16 s and recovery time, 246 ± 36 s) for clove oil, 150 mg L−1 (induction, 287 ± 59 and recovery time, 358 ± 75 s) for MS-222, 1 mg L−1 (induction, 178 ± 38 and recovery time, 264 ± 57 s) for ketamine, and 30 mg L−1 (induction, 134 ± 22 and recovery time, 285 ± 42 s) for tobacco. According to our results, significant changes in hematological parameters including white blood cells (WBCs), red blood cells (RBCs), hematocrit (Ht), and hemoglobin (Hb) were found between four anesthetics agents. Also, significant differences were observed in some plasma parameters including cortical, glucose, and lactate between experimental treatments. Induction and recovery times for juvenile Oncorhynchus mykiss anesthetized with anesthetic agents were dose-dependent.",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "f4e73a0c766ce1ead78b2b770e641f61",
"text": "Epistasis, or interactions between genes, has long been recognized as fundamentally important to understanding the structure and function of genetic pathways and the evolutionary dynamics of complex genetic systems. With the advent of high-throughput functional genomics and the emergence of systems approaches to biology, as well as a new-found ability to pursue the genetic basis of evolution down to specific molecular changes, there is a renewed appreciation both for the importance of studying gene interactions and for addressing these questions in a unified, quantitative manner.",
"title": ""
},
{
"docid": "b819015b4c65522905d5ee0eeba11442",
"text": "This version of the referenced work is the post-print version of the article—it is NOT the final published version nor the corrected proofs. If you would like to receive the final published version please send a request to Paul.Lowry.PhD@gmail.com, and I will be happy to send you the latest version. Moreover, you can contact the publisher's website and order the final version there, as well. (2011). \" Privacy concerns versus desire for interpersonal awareness in driving the use of self-disclosure technologies: The case of instant messaging in two cultures, \" If you have any questions and/or would like copies of other articles I've published, please email me at Paul.Lowry. Alternatively, I have an online system that you can use to request any of my published or forthcoming articles. To go to this system, click on the following link: interests include human-computer interaction and cross-cultural issues in information systems. ACKNOWLEDGMENTS We appreciate the support our data collections have received from the Kong University in China. We also acknowledge the participants who gave us useful feedback at the HCI International 2007 workshop and those who gave us feedback on the paper at a presentation at the University College Dublin, Spring 2008. Finally, we appreciate the insightful feedback and reviews and contributions to the literature from ABSTRACT Social computing technologies typically have multiple features that allow users to reveal their personal information to other users. Such self-disclosure (SD) behavior is generally considered positive and beneficial in interpersonal communication and relationships. Using a newly proposed model based on social exchange theory, this paper investigates and empirically validates the relationships between SD technology use and culture. In particular, we explore the effects of culture on information privacy concerns and the desire for online interpersonal awareness, which influence attitudes toward, intention to use, and actual use of SD technologies. Our model was tested using arguably the strongest social computing technology for online self-disclosure—instant messaging (IM)—with users from China and the US. Our findings reveal that cross-cultural dimensions are significant predictors of information privacy concerns and desire for online awareness, which were, in turn, found to be predictors of attitude toward, intention to use, and actual use of IM. Overall, our proposed model is applicable to both cultures. Our findings help enhance the theoretical understanding of the effects of culture and privacy concerns on SD technologies, and provide practical suggestions for developers of SD technologies, such as …",
"title": ""
},
{
"docid": "d638bf6a0ec3354dd6ba90df0536aa72",
"text": "Selected elements of dynamical system (DS) theory approach to nonlinear time series analysis are introduced. Key role in this concept plays a method of time delay. The method enables us reconstruct phase space trajectory of DS without knowledge of its governing equations. Our variant is tested and compared with wellknown TISEAN package for Lorenz and Hénon systems. Introduction There are number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in a phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space and time evolution of the system creates a trajectory in the phase space. From this point of view we consider our time series to be a projection of trajectory of DS to one (or more – when we have more simultaneously measured variables) coordinates of phase space. This view was enabled due to formulation of embedding theorem [1], [2] at the beginning of the 1980s. It says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task while using this method is to determine values of time delay τ and embedding dimension m. We tested individual steps of this method on simulated data generated by Lorenz and Hénon systems. We compared results computed by our own programs with outputs of program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable we construct a vector ( ) , ,..., 1 , N i t x i = in phase space in time ti as following: ( ) ( ) ( ) ( ) ( ) ( ) [ ], 1 ,..., 2 , , τ τ τ − + + + = m t x t x t x t x t i i i i i X where i goes from 1 to N – (m – 1)τ, τ is time delay, m is a dimension of reconstructed space (embedding dimension) and M = N – (m – 1)τ is number of points (states) in the phase space. According to embedding theorem, when this is done in a proper way, dynamics reconstructed using this formula is equivalent to the dynamics on an attractor in the origin phase space in the sense that characteristic invariants of the system are conserved. The time delay method and related aspects are described in literature, e.g. [4]. We estimated the two parameters—time delay and embedding dimension—using algorithms below. Choosing a time delay To determine a suitable time delay we used average mutual information (AMI), a certain generalization of autocorrelation function. Average mutual information between sets of measurements A and B is defined [5]:",
"title": ""
},
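The delay-embedding construction described in the passage above lends itself to a compact illustration. The following is a minimal sketch only: the function names, the toy sine series, and the histogram-based estimate of average mutual information are illustrative assumptions, not code from the cited work.

```python
# Minimal sketch of time-delay reconstruction and a histogram AMI estimate.
import numpy as np

def delay_embed(x, m, tau):
    """Return the M x m matrix of delay vectors
    X_i = [x(t_i), x(t_i + tau), ..., x(t_i + (m-1)tau)], M = N - (m-1)*tau."""
    n = len(x)
    m_points = n - (m - 1) * tau
    if m_points <= 0:
        raise ValueError("series too short for this (m, tau)")
    return np.column_stack([x[j * tau: j * tau + m_points] for j in range(m)])

def average_mutual_information(x, tau, bins=16):
    """Histogram estimate of I(x(t); x(t+tau)); the first minimum over tau
    is a common heuristic for choosing the delay."""
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x(t)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of x(t+tau)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy usage on a noisy sine wave.
t = np.linspace(0.0, 20.0, 1000)
x = np.sin(t) + 0.01 * np.random.randn(t.size)
X = delay_embed(x, m=3, tau=5)
print(X.shape)                                   # -> (990, 3)
print(average_mutual_information(x, tau=5))      # AMI at this delay
```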
{
"docid": "65c823a03c6626f76f753c52e120543c",
"text": "Within interaction design, several forces have coincided in the last few years to fuel the emergence of a new field of inquiry, which we summarize under the label of embodied interaction. The term was introduced to the HCI community by Dourish (2001) as a way to combine the then-distinct perspectives of tangible interaction (Ullmer & Ishii, 2001) and social computing. Briefly, his point was that computing must be approached as twice embodied: in the physical/material sense and in the sense of social fabrics and practices. Dourish’s work has been highly influential in the academic interaction design field and has to be considered a seminal contribution at the conceptual level. Still, we find that more needs to be done to create a body of contemporary designoriented knowledge on embodied interaction. Several recent developments within academia combine to inform and advance the emerging field of embodied interaction. For example, the field of wearable computing (see Mann, 1997, for an introduction to early and influential work), which can be considered a close cousin of tangible interaction, puts particular emphasis on physical bodiness and full-body interaction. The established discipline of human-computer interaction (HCI) has increasingly turned towards considering the whole body in interaction, often drawing on recent advances in cognitive science (e.g., Johnson, 2007) and philosophy (e.g., Shusterman, 2008). Some characteristic examples are the work of Twenebowa Larssen et al. (2007) on conceptualization of haptic and kinaesthetic sensations in tangible interaction and Schiphorst’s (2009) design work on the somaesthetics of interaction. Höök (2009) provides an interesting view of the “bodily turn” in HCI through the progression of four successive design cases. In more technical terms, the growing acceptance of the Internet of Things vision (which according to Dodson [2003] traces its origins to MIT around 1999) serves as a driver and enabler for realizations of embodied interaction. Finally, it should be mentioned that analytical perspectives on interaction in media studies are increasingly moving from interactivity to performativity, a concept of long standing in, for example, performance studies which turns out to have strong implications also for how interaction is seen as socially embodied (see Bardzell, Bolter, & Löwgren, 2010, for an example). The picture that emerges is one of a large and somewhat fuzzy design space, that has been predicted for quite a few years within academia but is only now becoming increasingly amenable ORIGINAL ARTICLE",
"title": ""
},
{
"docid": "02ad36e53e8b2f697b98b7d6427bcc29",
"text": "Conventional firewalls rely on the notions of restricted topology and controlled entry points to function. More precisely, they rely on the assumption that everyone on one side of the entry point—the firewall—is to be trusted, and that anyone on the other side is, at least potentially, an enemy. The vastly expanded Internet connectivity in recent years has called that assumption into question. We propose a “distributed firewall”, using IPSEC, a policy language, and system management tools. A distributed firewall preserves central control of access policy, while reducing or eliminating any dependency on topology.",
"title": ""
}
] |
scidocsrr
|
a44f4542c10c390c99101498abf9cef2
|
POTs: Protective Optimization Technologies
|
[
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "53a55e8aa8b3108cdc8d015eabb3476d",
"text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.",
"title": ""
}
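The passage above describes poisoning an SVM by gradient ascent on its validation error. The sketch below conveys the idea only in a simplified form: it injects a single mislabeled point and moves it using a crude finite-difference gradient rather than the paper's analytical gradient derived from the SVM optimality conditions. The dataset, labels, step sizes, and iteration count are all illustrative assumptions.

```python
# Simplified numerical-gradient poisoning of a linear SVM (illustration only).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

def val_error(poison_x, poison_y, X_tr, y_tr, X_val, y_val):
    # Retrain with the injected point and measure validation error.
    Xp = np.vstack([X_tr, poison_x])
    yp = np.append(y_tr, poison_y)
    clf = SVC(kernel="linear", C=1.0).fit(Xp, yp)
    return 1.0 - clf.score(X_val, y_val)

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

poison = X_tr[y_tr == 0][0].copy()   # start from a class-0 location...
poison_label = 1                     # ...but inject it with the opposite label
eps, lr = 1e-2, 0.5
for _ in range(20):                  # gradient-ascent steps on validation error
    base = val_error(poison, poison_label, X_tr, y_tr, X_val, y_val)
    grad = np.zeros_like(poison)
    for d in range(poison.shape[0]):
        shifted = poison.copy()
        shifted[d] += eps
        grad[d] = (val_error(shifted, poison_label, X_tr, y_tr, X_val, y_val) - base) / eps
    poison += lr * grad
print("final validation error:", val_error(poison, poison_label, X_tr, y_tr, X_val, y_val))
```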
] |
[
{
"docid": "333df3bdb8be67123a5b06c4c29e18bc",
"text": "Deep Learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks, such as image classification, speech recognition, language understanding. Deep learning has been proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive uses cases for deep learning in particular in the domain of computer vision. We surveys the current state-of-the-art in libraries, tools and infrastructures (e. g. GPUs and clouds) for implementing, training and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential. In particular, both the availability and scope of such datasets is typically very limited. A main contribution of this paper is the creation of an automotive dataset, that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times as well as the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real world setting during manufacturing process.",
"title": ""
},
{
"docid": "082e1a8052ed0799612c1bf6b2ec3334",
"text": "Researchers are actively exploring techniques to enforce control-flow integrity (CFI), which restricts program execution to a predefined set of targets for each indirect control transfer to prevent code-reuse attacks. While hardware-assisted CFI enforcement may have the potential for advantages in performance and flexibility over software instrumentation, current hardware-assisted defenses are either incomplete (i.e., do not enforce all control transfers) or less efficient in comparison. We find that the recent introduction of hardware features to log complete control-flow traces, such as Intel Processor Trace (PT), provides an opportunity to explore how efficient and flexible a hardware-assisted CFI enforcement system may become. While Intel PT was designed to aid in offline debugging and failure diagnosis, we explore its effectiveness for online CFI enforcement over unmodified binaries by designing a parallelized method for enforcing various types of CFI policies. We have implemented a prototype called GRIFFIN in the Linux 4.2 kernel that enables complete CFI enforcement over a variety of software, including the Firefox browser and its jitted code. Our experiments show that GRIFFIN can enforce fine-grained CFI policies with shadow stack as recommended by researchers at a performance that is comparable to software-only instrumentation techniques. In addition, we find that alternative logging approaches yield significant performance improvements for trace processing, identifying opportunities for further hardware assistance.",
"title": ""
},
{
"docid": "75343eee16d87d02bc9e588a42a1abcc",
"text": "This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent time. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB/OQNLP, NEWUOA, and TOMLAB/MULTIMIN show superior performance in terms of refining a near-optimal solution.",
"title": ""
},
{
"docid": "49215cb8cb669aef5ea42dfb1e7d2e19",
"text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author",
"title": ""
},
{
"docid": "633be21ba8ae6b8882c8b4ac37969027",
"text": "This paper presents a local search, based on a new neighborhood for the job-shop scheduling problem, and its application within a biased random-key genetic algorithm. Schedules are constructed by decoding the chromosome supplied by the genetic algorithm with a procedure that generates active schedules. After an initial schedule is obtained, a local search heuristic, based on an extension of the graphical method of Akers (1956), is applied to improve the solution. The new heuristic is tested on a set of 205 standard instances taken from the job-shop scheduling literature and compared with results obtained by other approaches. The new algorithm improved the best known solution values for 57 instances.",
"title": ""
},
{
"docid": "01ebfe5e28bfcd111a014d1a47743028",
"text": "In this paper, we propose a Cognitive Caching approach for the Future Fog (CCFF) that takes into consideration the value of the exchanged data in Information Centric Sensor Networks (ICSNs). Our approach depends on four functional parameters in ICSNs. These four main parameters are: age of the data, popularity of on-demand requests, delay to receive the requested information and data fidelity. These parameters are considered together to assign a value to the cached data while retaining the most valuable one in the cache for prolonged time periods. This CCFF approach provides significant availability for most valuable and difficult to retrieve data in the ICSNs. Extensive simulations and case studies have been examined in this research in order to compare to other dominant cache management frameworks in the literature under varying circumstances such as data popularity, cache size, data publisher load, and node connectivity degree. Formal fidelity and trust analysis has been applied as well to emphasize the effectiveness of CCFF in Fog paradigms, where edge devices can retrieve unsecured data from the authorized nodes in the cloud.",
"title": ""
},
{
"docid": "b673da5389899d61fb4f5a91f039226b",
"text": "Light fields captured by the Lytro-Illum camera are the first to appear in the consumer market, capable of providing refocused pictures at acceptable spatial resolution and quality. Since this is partially due to sampling of a huge number of light rays, efficient compression methods are required to store and exchange light field data. This paper presents a performance study of HEVC-compatible coding of Lytro-Illum light fields using different data formats for standard coding. The efficiency of 5 different light field data formats are evaluated using a data set of 12 light field images and the standard HEVC coding configurations of Still-Image Profile, All-Intra, Low Delay B and P and Random Access. Unexpectedly, the results show that relative performance is not consistent across all coding configurations, raising new research questions regarding standard coding of Lytro-Illum light fields using HEVC. Most importantly, the proposed data formats greatly increase HEVC performance.",
"title": ""
},
{
"docid": "24116898bef26e6327d79d85e8d290fd",
"text": "This paper presents an inclusive set of EMTP models used to simulate the cause of voltage sags such as short circuits, transformer energizing, induction motor starting. Voltage sag is usually described as characteristics of both magnitude and duration, but it is also necessary to detect phase angle jump in order to identify sags phenomena and finding the solutions, especially for sags due to short circuits. In case of the simulation of voltage sags due to short circuit, their effect on the magnitude, duration and phase-jump are studied.",
"title": ""
},
{
"docid": "477af6326b8d51afcb15ef6107fe3cd7",
"text": "BACKGROUND\nThe few studies that have investigated the relationship between mobile phone use and sleep have mainly been conducted among children and adolescents. In adults, very little is known about mobile phone usage in bed our after lights out. This cross-sectional study set out to examine the association between bedtime mobile phone use and sleep among adults.\n\n\nMETHODS\nA sample of 844 Flemish adults (18-94 years old) participated in a survey about electronic media use and sleep habits. Self-reported sleep quality, daytime fatigue and insomnia were measured using the Pittsburgh Sleep Quality Index (PSQI), the Fatigue Assessment Scale (FAS) and the Bergen Insomnia Scale (BIS), respectively. Data were analyzed using hierarchical and multinomial regression analyses.\n\n\nRESULTS\nHalf of the respondents owned a smartphone, and six out of ten took their mobile phone with them to the bedroom. Sending/receiving text messages and/or phone calls after lights out significantly predicted respondents' scores on the PSQI, particularly longer sleep latency, worse sleep efficiency, more sleep disturbance and more daytime dysfunction. Bedtime mobile phone use predicted respondents' later self-reported rise time, higher insomnia score and increased fatigue. Age significantly moderated the relationship between bedtime mobile phone use and fatigue, rise time, and sleep duration. An increase in bedtime mobile phone use was associated with more fatigue and later rise times among younger respondents (≤ 41.5 years old and ≤ 40.8 years old respectively); but it was related to an earlier rise time and shorter sleep duration among older respondents (≥ 60.15 years old and ≥ 66.4 years old respectively).\n\n\nCONCLUSION\nFindings suggest that bedtime mobile phone use is negatively related to sleep outcomes in adults, too. It warrants continued scholarly attention as the functionalities of mobile phones evolve rapidly and exponentially.",
"title": ""
},
{
"docid": "1ee74e505f5efc99331d5b63565882cf",
"text": "Consumers shopping in \"brick-and-mortar\" (non-virtual) stores often use their mobile phones to consult with others about potential purchases. Via a survey (n = 200), we detail current practices in seeking remote shopping advice. We then consider how emerging social platforms, such as social networking sites and crowd labor markets, could offer rich next-generation remote shopping advice experiences. We conducted a field experiment in which shoppers shared photographs of potential purchases via MMS, Facebook, and Mechanical Turk. Paid crowdsourcing, in particular, proved surprisingly useful and influential as a means of augmenting in-store shopping. Based on our findings, we offer design suggestions for next-generation remote shopping advice systems.",
"title": ""
},
{
"docid": "0811f0768e8112b40bbcd38625db2526",
"text": "The Alfred Mann Foundation is completing development of a coordinated network of BION/spl reg/ microstimulator/sensor (hereinafter implant) that has broad stimulating, sensing and communication capabilities. The network consists of a master control unit (MCU) in communication with a group of BION implants. Each implant is powered by a custom lithium-ion rechargeable 10 mW-hr battery. The charging, discharging, safety, stimulating, sensing, and communication circuits are designed to be highly efficient to minimize energy use and maximize battery life and time between charges. The stimulator can be programmed to deliver pulses in any value in the following range: 5 /spl mu/A to 20 mA in 3.3% constant current steps, 7 /spl mu/s to 2000 /spl mu/s in 7 /spl mu/s pulse width steps, and 1 to 4000 Hz in frequency. The preamp voltage sensor covers the range 10 /spl mu/V to 1.0 V with bandpass filtering and several forms of data analysis. The implant also contains sensors that can read out pressure, temperature, DC magnetic field, and distance (via a low frequency magnetic field) up to 20 cm between any two BION implants. The MCU contains a microprocessor, user interface, two-way communication system, and a rechargeable battery. The MCU can command and interrogate in excess of 800 BlON implants every 10 ms, i.e., 100 times a second.",
"title": ""
},
{
"docid": "f26df52af74f9c2f51ff0e56daeb4c38",
"text": "Browsing is part of the information seeking process, used when information needs are ill-defined or unspecific. Browsing and searching are often interleaved during information seeking to accommodate changing awareness of information needs. Digital Libraries often support full-text search, but are not so helpful in supporting browsing. Described here is a novel browsing system created for the Greenstone software used by the New Zealand Digital Library that supports users in a more natural approach to the information seeking process.",
"title": ""
},
{
"docid": "72345bf404d21d0f7aa1e54a5710674c",
"text": "Many real-world data sets exhibit skewed class distributions in which almost all cases are allotted to a class and far fewer cases to a smaller, usually more interesting class. A classifier induced from an imbalanced data set has, typically, a low error rate for the majority class and an unacceptable error rate for the minority class. This paper firstly provides a systematic study on the various methodologies that have tried to handle this problem. Finally, it presents an experimental study of these methodologies with a proposed mixture of expert agents and it concludes that such a framework can be a more effective solution to the problem. Our method seems to allow improved identification of difficult small classes in predictive analysis, while keeping the classification ability of the other classes in an acceptable level.",
"title": ""
},
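The passage above surveys remedies for skewed class distributions. As a hedged illustration of one widely used family of remedies (cost-sensitive re-weighting of the minority class), the sketch below compares unweighted and class-weighted logistic regression on a synthetic imbalanced set; it does not reproduce the paper's proposed mixture of expert agents, and the dataset and classifier choice are assumptions.

```python
# Minority-class recall with and without class re-weighting (illustration only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    clf = LogisticRegression(class_weight=cw, max_iter=1000).fit(X_tr, y_tr)
    print(cw, "minority recall:", recall_score(y_te, clf.predict(X_te)))
```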
{
"docid": "7150a73a223e4a398929fc75994b4117",
"text": "Alzheimer disease (AD) and type 2 diabetes mellitus (T2DM) are conditions that affect a large number of people in the industrialized countries. Both conditions are on the increase, and finding novel treatments to cure or prevent them are a major aim in research. Somewhat surprisingly, AD and T2DM share several molecular processes that underlie the respective degenerative developments. This review describes and discusses several of these shared biochemical and physiological pathways. Disturbances in insulin signalling appears to be the main common impairment that affects cell growth and differentiation, cellular repair mechanisms, energy metabolism, and glucose utilization. Insulin not only regulates blood sugar levels but also acts as a growth factor on all cells including neurons in the CNS. Impairment of insulin signalling therefore not only affects blood glucose levels but also causes numerous degenerative processes. Other growth factor signalling systems such as insulin growth factors (IGFs) and transforming growth factors (TGFs) also are affected in both conditions. Also, the misfolding of proteins plays an important role in both diseases, as does the aggregation of amyloid peptides and of hyperphosphorylated proteins. Furthermore, more general physiological processes such as angiopathic and cytotoxic developments, the induction of apoptosis, or of non-apoptotic cell death via production of free radicals greatly influence the progression of AD and T2DM. The increase of detailed knowledge of these common physiological processes open up the opportunities for treatments that can prevent or reduce the onset of AD as well as T2DM.",
"title": ""
},
{
"docid": "1385d6d1e6f0858c3105e151850fa24b",
"text": "Newspapers and blogs express opinion of news entities (people, places, things) while reporting on recent events. We present a system that assigns scores indicating positive or negative opinion to each distinct entity in the text corpus. Our system consists of a sentiment identification phase, which associates expressed opinions with each relevant entity, and a sentiment aggregation and scoring phase, which scores each entity relative to others in the same class. Finally, we evaluate the significance of our scoring techniques over large corpus of news and blogs.",
"title": ""
},
{
"docid": "0b1310ac9630fa4a1c90dcf90d4ae327",
"text": "The Mirai Distributed Denial-of-Service (DDoS) attack exploited security vulnerabilities of Internet-of-Things (IoT) devices and thereby clearly signaled that attackers have IoT on their radar. Securing IoT is therefore imperative, but in order to do so it is crucial to understand the strategies of such attackers. For that purpose, in this paper, a novel IoT honeypot called ThingPot is proposed and deployed. Honeypot technology mimics devices that might be exploited by attackers and logs their behavior to detect and analyze the used attack vectors. ThingPot is the first of its kind, since it focuses not only on the IoT application protocols themselves, but on the whole IoT platform. A Proof-of-Concept is implemented with XMPP and a REST API, to mimic a Philips Hue smart lighting system. ThingPot has been deployed for 1.5 months and through the captured data we have found five types of attacks and attack vectors against smart devices. The ThingPot source code is made available as open source.",
"title": ""
},
{
"docid": "054cde7ac85562e1f96e69f0d769de29",
"text": "Research on the impact of nocturnal road traffic noise on sleep and the consequences on daytime functioning demonstrates detrimental effects that cannot be ignored. The physiological reactions due to continuing noise processing during night time lead to primary sleep disturbances, which in turn impair daytime functioning. This review focuses on noise processing in general and in relation to sleep, as well as methodological aspects in the study of noise and sleep. More specifically, the choice of a research setting and noise assessment procedure is discussed and the concept of sleep quality is elaborated. In assessing sleep disturbances, we differentiate between objectively measured and subjectively reported complaints, which demonstrates the need for further understanding of the impact of noise on several sleep variables. Hereby, mediating factors such as noise sensitivity appear to play an important role. Research on long term effects of noise intrusion on sleep up till now has mainly focused on cardiovascular outcomes. The domain might benefit from additional longitudinal studies on deleterious effects of noise on mental health and general well-being.",
"title": ""
},
{
"docid": "7704e1154d480c167eff13c0e3fe4411",
"text": "An autonomous dual wheel self balancing robot is developed that is capable of balancing its position around predetermined position. Initially the system was nonlinear and unstable. It is observed that the system becomes stable after redesigning the physical structure of the system using PID controller and analyzing its dynamic behavior using mathematical modeling. The position of self balancing robot is controlled by PID controller. Simulation results using PROTEOUS, MATLAB, and VM lab are observed and verified vital responses of different components. Balancing is claimed and shown the verification for this nonlinear and unstable system. Some fluctuations in forward or backward around its mean position is observed, afterwards it acquires its balanced position in reasonable settling time. The research is applicable in gardening, hospitals, shopping malls and defense systems etc.",
"title": ""
},
{
"docid": "0d20f5ae084c6ca4e7a834e1eee1e84c",
"text": "Gantry-tilted helical multi-slice computed tomography (CT) refers to the helical scanning CT system equipped with multi-row detector operating at some gantry tilting angle. Its purpose is to avoid the area which is vulnerable to the X-ray radiation. The local tomography is to reduce the total radiation dose by only scanning the region of interest for image reconstruction. In this paper we consider the scanning scheme, and incorporate the local tomography technique with the gantry-tilted helical multi-slice CT. The image degradation problem caused by gantry tilting is studied, and a new error correction method is proposed to deal with this problem in the local CT. Computer simulation shows that the proposed method can enhance the local imaging performance in terms of image sharpness and artifacts reduction",
"title": ""
},
{
"docid": "d055d4d53bd523aaf9913b8237e155f7",
"text": "In practice, each writer provides only a limited number of signature samples to design a signature verification (SV) system. Hybrid generative-discriminative ensembles of classifiers (EoCs) are proposed in this paper to design an off-line SV system from few samples, where the classifier selection process is performed dynamically. To design the generative stage, multiple discrete left-to-right Hidden Markov Models (HMMs) are trained using a different number of states and codebook sizes, allowing the system to learn signatures at different levels of perception. To design the discriminative stage, HMM likelihoods are measured for each training signature, and assembled into feature vectors that are used to train a diversified pool of two-class classifiers through a specialized Random Subspace Method. During verification, a new dynamic selection strategy based on the K-nearest-oracles (KNORA) algorithm and on Output Profiles selects the most accurate EoCs to classify a given input signature. This SV system is suitable for incremental learning of new signature samples. Experiments performed with real-world signature data (comprised of genuine samples, and random, simple and skilled forgeries) indicate that the proposed dynamic selection strategy can significantly reduce the overall error rates, with respect to other EoCs formed using well-known dynamic and static selection strategies. Moreover, the performance of the SV system proposed in this paper is significantly greater than or comparable to that of related systems found in the literature.",
"title": ""
}
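The passage above relies on KNORA-style dynamic selection of ensemble members. The sketch below isolates the core KNORA-Eliminate step: for each test sample, keep only the pool classifiers that correctly label all K nearest validation neighbours, then majority-vote among them. The pool of bagged decision trees, K = 7, and the fallback to the full pool when no classifier qualifies are simplifying assumptions; the paper's HMM-based features and output profiles are not modelled.

```python
# Minimal KNORA-Eliminate dynamic selection sketch (illustration only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=1)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

# Pool of weak classifiers trained on bootstrap samples of the training set.
rng = np.random.default_rng(1)
pool = []
for _ in range(10):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr[idx], y_tr[idx]))

val_preds = np.array([clf.predict(X_val) for clf in pool])   # shape (n_clf, n_val)
nn = NearestNeighbors(n_neighbors=7).fit(X_val)

correct = 0
for x, target in zip(X_te, y_te):
    _, neigh = nn.kneighbors(x.reshape(1, -1))
    neigh = neigh[0]
    # "Oracles": classifiers that get every neighbour right.
    mask = (val_preds[:, neigh] == y_val[neigh]).all(axis=1)
    selected = [clf for clf, ok in zip(pool, mask) if ok] or pool  # simple fallback
    votes = np.array([clf.predict(x.reshape(1, -1))[0] for clf in selected])
    correct += int(np.bincount(votes).argmax() == target)
print("KNORA-E accuracy:", correct / len(y_te))
```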
] |
scidocsrr
|
d333f35aa463f13596b558052cf27aa6
|
NFV enabled IoT architecture for an operating room environment
|
[
{
"docid": "29d02d7219cb4911ab59681e0c70a903",
"text": "As the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review on the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well.",
"title": ""
},
{
"docid": "ecb06a681f7d14fc690376b4c5a630af",
"text": "Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.",
"title": ""
}
] |
[
{
"docid": "ddc73328c18db1e4ef585671fb3a838d",
"text": "Gamification has drawn the attention of academics, practitioners and business professionals in domains as diverse as education, information studies, human–computer interaction, and health. As yet, the term remains mired in diverse meanings and contradictory uses, while the concept faces division on its academic worth, underdeveloped theoretical foundations, and a dearth of standardized guidelines for application. Despite widespread commentary on its merits and shortcomings, little empirical work has sought to validate gamification as a meaningful concept and provide evidence of its effectiveness as a tool for motivating and engaging users in non-entertainment contexts. Moreover, no work to date has surveyed gamification as a field of study from a human–computer studies perspective. In this paper, we present a systematic survey on the use of gamification in published theoretical reviews and research papers involving interactive systems and human participants. We outline current theoretical understandings of gamification and draw comparisons to related approaches, including alternate reality games (ARGs), games with a purpose (GWAPs), and gameful design. We present a multidisciplinary review of gamification in action, focusing on empirical findings related to purpose and context, design of systems, approaches and techniques, and user impact. Findings from the survey show that a standard conceptualization of gamification is emerging against a growing backdrop of empirical participantsbased research. However, definitional subjectivity, diverse or unstated theoretical foundations, incongruities among empirical findings, and inadequate experimental design remain matters of concern. We discuss how gamification may to be more usefully presented as a subset of a larger effort to improve the user experience of interactive systems through gameful design. We end by suggesting points of departure for continued empirical investigations of gamified practice and its effects. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "abedd6f0896340a190750666b1d28d91",
"text": "This study aimed to characterize the neural generators of the early components of the visual evoked potential (VEP) to isoluminant checkerboard stimuli. Multichannel scalp recordings, retinotopic mapping and dipole modeling techniques were used to estimate the locations of the cortical sources giving rise to the early C1, P1, and N1 components. Dipole locations were matched to anatomical brain regions visualized in structural magnetic resonance imaging (MRI) and to functional MRI (fMRI) activations elicited by the same stimuli. These converging methods confirmed previous reports that the C1 component (onset latency 55 msec; peak latency 90-92 msec) was generated in the primary visual area (striate cortex; area 17). The early phase of the P1 component (onset latency 72-80 msec; peak latency 98-110 msec) was localized to sources in dorsal extrastriate cortex of the middle occipital gyrus, while the late phase of the P1 component (onset latency 110-120 msec; peak latency 136-146 msec) was localized to ventral extrastriate cortex of the fusiform gyrus. Among the N1 subcomponents, the posterior N150 could be accounted for by the same dipolar source as the early P1, while the anterior N155 was localized to a deep source in the parietal lobe. These findings clarify the anatomical origin of these VEP components, which have been studied extensively in relation to visual-perceptual processes.",
"title": ""
},
{
"docid": "704611db1aea020103b093a2156cd94d",
"text": "With the growing number of wearable devices and applications, there is an increasing need for a flexible body channel communication (BCC) system that supports both scalable data rate and low power operation. In this paper, a highly flexible frequency-selective digital transmission (FSDT) transmitter that supports both data scalability and low power operation with the aid of two novel implementation methods is presented. In an FSDT system, data rate is limited by the number of Walsh spreading codes available for use in the optimal body channel band of 40-80 MHz. The first method overcomes this limitation by applying multi-level baseband coding scheme to a carrierless FSDT system to enhance the bandwidth efficiency and to support a data rate of 60 Mb/s within a 40-MHz bandwidth. The proposed multi-level coded FSDT system achieves six times higher data rate as compared to other BCC systems. The second novel implementation method lies in the use of harmonic frequencies of a Walsh encoded FSDT system that allows the BCC system to operate in the optimal channel bandwidth between 40-80 MHz with half the clock frequency. Halving the clock frequency results in a power consumption reduction of 32%. The transmitter was fabricated in a 65-nm CMOS process. It occupies a core area of 0.24 × 0.3 mm 2. When operating under a 60-Mb/s data-rate mode, the transmitter consumes 1.85 mW and it consumes only 1.26 mW when operating under a 5-Mb/s data-rate mode.",
"title": ""
},
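The transmitter in the passage above builds on Walsh-code spreading. The toy baseband sketch below only illustrates that underlying idea: data symbols are spread with one row of a Hadamard (Walsh, up to ordering) matrix and recovered by correlation. The code length, the chosen row, and the bipolar mapping are assumptions; the multi-level coding, chip rate, and analog front end of the actual chip are not modelled.

```python
# Toy Walsh-code spreading and despreading (illustration only).
import numpy as np
from scipy.linalg import hadamard

walsh = hadamard(16)            # 16 orthogonal codes of length 16
code = walsh[5]                 # pick one code (illustrative index)

bits = np.array([1, 0, 1, 1, 0])
symbols = 2 * bits - 1          # map {0,1} -> {-1,+1}
chips = np.concatenate([b * code for b in symbols])   # spread: 5 * 16 = 80 chips

# Despread by correlating each 16-chip block with the same code.
blocks = chips.reshape(-1, 16)
recovered = (blocks @ code > 0).astype(int)
print(recovered)                # -> [1 0 1 1 0]
```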
{
"docid": "4f6fc6635f661de7dd7081f3fd6e0a29",
"text": "Wirelessly networked systems of implantable medical devices endowed with sensors and actuators will be the basis of many innovative, sometimes revolutionary therapies. The biggest obstacle in realizing this vision of networked implantable devices is posed by the dielectric nature of the human body, which strongly attenuates radio-frequency (RF) electromagnetic waves. In this paper we present the first hardware and software architecture of an Internet of Medical Things (IoMT) platform with ultrasonic connectivity for intra-body communications that can be used as a basis for building future IoT-ready medical implantable and wearable devices. We show that ultrasonic waves can be efficiently generated and received with low-power and mm-sized components, and that despite the conversion loss introduced by ultrasonic transducers the gap in attenuation between 2.4GHz RF and ultrasonic waves is still substantial, e.g., ultrasounds offer 70dB less attenuation over 10cm. We show that the proposed IoMT platform requires much lower transmission power compared to 2.4 GHz RF with equal reliability in tissues, e.g., 35 dBm lower over 12 cm for 10−3 Bit Error Rate (BEr) leading to lower energy per bit and longer device lifetime. Finally, we show experimentally that 2.4 GHz RF links are not functional at all above 12 cm, while ultrasonic links achieve a reliability of 10−6 up to 20 cm with less than 0 dBm transmission power.",
"title": ""
},
{
"docid": "29c4156e966f2e177a71d604b1883204",
"text": "This paper discusses the use of factorization techniques in distributional semantic models. We focus on a method for redistributing the weight of latent variables, which has previously been shown to improve the performance of distributional semantic models. However, this result has not been replicated and remains poorly understood. We refine the method, and provide additional theoretical justification, as well as empirical results that demonstrate the viability of the proposed approach.",
"title": ""
},
{
"docid": "49bd1cdbeea10f39a2b34cfa5baac0ef",
"text": "Recently, image inpainting task has revived with the help of deep learning techniques. Deep neural networks, especially the generative adversarial networks~(GANs) make it possible to recover the missing details in images. Due to the lack of sufficient context information, most existing methods fail to get satisfactory inpainting results. This work investigates a more challenging problem, e.g., the newly-emerging semantic image inpainting - a task to fill in large holes in natural images. In this paper, we propose an end-to-end framework named progressive generative networks~(PGN), which regards the semantic image inpainting task as a curriculum learning problem. Specifically, we divide the hole filling process into several different phases and each phase aims to finish a course of the entire curriculum. After that, an LSTM framework is used to string all the phases together. By introducing this learning strategy, our approach is able to progressively shrink the large corrupted regions in natural images and yields promising inpainting results. Moreover, the proposed approach is quite fast to evaluate as the entire hole filling is performed in a single forward pass. Extensive experiments on Paris Street View and ImageNet dataset clearly demonstrate the superiority of our approach. Code for our models is available at https://github.com/crashmoon/Progressive-Generative-Networks.",
"title": ""
},
{
"docid": "05c72978e9b4437c648398d5bb824fed",
"text": "In this paper we propose a novel authentication mechanism for session mobility in Next Generation Networks named as Hierarchical Authentication Key Management (HAKM). The design objectives of HAKM are twofold: i) to minimize the authentication latency in NGNs; ii) to provide protection against an assortment of attacks such as denial-of-service attacks, man-in-the-middle attacks, guessing attacks, and capturing node attacks. In order to achieve these objectives, we combine Session Initiation Protocol (SIP) with Hierarchical Mobile IPv6 (HMIPv6) to perform local authentication for session mobility. The concept of group keys and pairwise keys with one way hash function is employed to make HAKM vigorous against the aforesaid attacks. The performance analysis and numerical results demonstrate that HAKM outperforms the existing approaches in terms of latency and protection against the abovementioned attacks.",
"title": ""
},
{
"docid": "d639525be41a05f1aec5d0637eff79ac",
"text": "We analyze X-COM: UFO Defense and its successful remake XCOM: Enemy Unknown to understand how remakes can repropose a concept across decades, updating most mechanics, and yet retain the dynamic and aesthetic values that defined the original experience. We use gameplay design patterns along with the MDA framework to understand the changes, identifying an unchanged core among a multitude of differences. We argue that two forces polarize the context within which the new game was designed, simultaneously guaranteeing a sameness of experience across the two games and at the same time pushing for radical changes. The first force, which resists the push for an updated experience, can be described as experiential isomorphism, or “sameness of form” in terms of related Gestalt qualities. The second force is generated by the necessity to update the usability of the design, aligning it to a current usability paradigm. We employ game usability heuristics (PLAY) to evaluate aesthetic patterns present in both games, and to understand the implicit vector for change. Our finding is that while patterns on the mechanical and to a slight degree the dynamic levels change between the games, the same aesthetic patterns are present in both, but produced through different means. The method we use offers new understanding of how sequels and remakes of games can change significantly from their originals while still giving rise to similar experiences.",
"title": ""
},
{
"docid": "59cb3aa5f6749b05ebdb9735177d66fd",
"text": "The basic aim of this article is to provide a model to explain stock performance utmost level. To reach this purpose, at the initial step, the model results composed of fundamental and technical analysis variables considered separately; in the second step, building the model composed of fundamental and technical analysis parameters which has best explaining ability was the focal point of this study. Artificial Neural Network (ANN) is an approach that has been widely used for financial classification problems for a long time. In addition, promising results of a novel machine learning method known as the Support Vector Machines (SVM) have been presented in several studies compared to the ANN. The stock performance results relying on fundamental analysis have shown more successful classification rates than the models based on technical analysis. Moreover, it was also experienced that the models constructed by using SVM method in the both type of analyses have shown more prominent results. JEL Classifications: G10, G11, C10",
"title": ""
},
{
"docid": "67e2bbbbd0820bb47f04258eb4917cc1",
"text": "One of the major differences between markets that follow a \" sharing economy \" paradigm and traditional two-sided markets is that the supply side in the sharing economy often includes individual nonprofessional decision makers, in addition to firms and professional agents. Using a data set of prices and availability of listings on Airbnb, we find that there exist substantial differences in the operational and financial performance of professional and nonprofessional hosts. In particular, properties managed by professional hosts earn 16.9% more in daily revenue, have 15.5% higher occupancy rates, and are 13.6% less likely to exit the market compared with properties owned by nonprofessional hosts, while controlling for property and market characteristics. We demonstrate that these performance differences between professionals and nonprofessionals can be partly explained by pricing inefficiencies. Specifically, we provide empirical evidence that nonprofes-sional hosts are less likely to offer different rates across stay dates based on the underlying demand patterns, such as those created by major holidays and conventions. We develop a parsimonious model to analyze the implications of having two such different host groups for a profit-maximizing platform operator and for a social planner. While a profit-maximizing platform operator should charge lower prices to nonprofessional hosts, a social planner would charge the same prices to professionals and nonprofessionals.",
"title": ""
},
{
"docid": "87fefee3cb35d188ad942ee7c8fad95f",
"text": "Financial frictions are a central element of most of the models that the literature on emerging markets crises has proposed for explaining the ‘Sudden Stop’ phenomenon. To date, few studies have aimed to examine the quantitative implications of these models and to integrate them with an equilibrium business cycle framework for emerging economies. This paper surveys these studies viewing them as ability-to-pay and willingness-to-pay variations of a framework that adds occasionally binding borrowing constraints to the small open economy real-business-cycle model. A common feature of the different models is that agents factor in the risk of future Sudden Stops in their optimal plans, so that equilibrium allocations and prices are distorted even when credit constraints do not bind. Sudden Stops are a property of the unique, flexible-price competitive equilibrium of these models that occurs in a particular region of the state space in which negative shocks make borrowing constraints bind. The resulting nonlinear effects imply that solving the models requires non-linear numerical methods, which are described in the survey. The results show that the models can yield relatively infrequent Sudden Stops with large current account reversals and deep recessions nested within smoother business cycles. Still, research in this area is at an early stage and this survey aims to stimulate further work. Cristina Arellano Enrique G. Mendoza Department of Economics Department of Economics Social Sciences Building University of Maryland Duke University College Park, MD 20742 Durham, NC 27708-0097 and NBER mendozae@econ.duke.edu",
"title": ""
},
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
},
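The planner in the passage above extends the RRT with closed-loop prediction; the sketch below shows only the standard RRT skeleton it builds on, for a holonomic point robot with straight-line steering and a point-wise collision check. The toy obstacle, bounds, step size, and goal bias are illustrative assumptions, not parameters from the paper.

```python
# Minimal 2-D RRT skeleton (illustration only; no closed-loop vehicle model).
import math
import random

def collides(p, obstacles):
    # obstacles: list of (cx, cy, r) circles; point-only check for brevity
    return any(math.hypot(p[0] - cx, p[1] - cy) <= r for cx, cy, r in obstacles)

def rrt(start, goal, obstacles, bounds, step=0.5, iters=5000, goal_tol=0.5):
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.05 else (   # small goal bias
            random.uniform(bounds[0], bounds[1]),
            random.uniform(bounds[2], bounds[3]))
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1                 # backtrack to the root
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt(start=(0.0, 0.0), goal=(9.0, 9.0),
           obstacles=[(5.0, 5.0, 1.5)], bounds=(0.0, 10.0, 0.0, 10.0))
print(len(path) if path else "no path found")
```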
{
"docid": "a86a7cafdd464e40c8a9cf8207d249ae",
"text": "Mobile marketing offers great opportunities for businesses. Marketing activities supported by mobile devices allow companies to directly communicate with their consumers without time or location barriers. Possibilities for marketers are numerous, but many aspects of mobile marketing still need further investigation. Especially, the topic of mobile advertising (m-advertising) is of major interest. M-advertising addresses consumers with individualized advertising messages via mobile devices. The underlying paper discusses the relevance of m-advertising and investigates how perceived advertising value of mobile marketing can be increased. The analysis is based on a study among consumers. All together a quota sample of 815 mobile phone users was interviewed. The results indicate that the message content is of greatest relevance for the perceived advertising value, while a high frequency of message exposure has a negative impact on it.",
"title": ""
},
{
"docid": "1b5fc0a7b39bedcac9bdc52584fb8a22",
"text": "Neem (Azadirachta indica) is a medicinal plant of containing diverse chemical active substances of several biological properties. So, the aim of the current investigation was to assess the effects of water leaf extract of neem plant on the survival and healthy status of Nile tilapia (Oreochromis niloticus), African cat fish (Clarias gariepinus) and zooplankton community. The laboratory determinations of lethal concentrations (LC 100 and LC50) through a static bioassay test were performed. The 24 h LC100 of neem leaf extract was estimated as 4 and 11 g/l, for juvenile's O. niloticus and C. gariepinus, respectively, while, the 96-h LC50 was 1.8 and 4 g/l, respectively. On the other hand, the 24 h LC100 for cladocera and copepoda were 0.25 and 0.45 g/l, respectively, while, the 96-h LC50 was 0.1 and 0.2 g/l, respectively. At the highest test concentrations, adverse effects were obvious with significant reductions in several cladoceran and copepod species. Some alterations in glucose levels, total protein, albumin, globulin as well as AST and ALT in plasma of treated O. niloticus and C. gariepinus with /2 and /10 LC50 of neem leaf water extract compared with non-treated one after 2 and 7 days of exposure were recorded and discussed. It could be concluded that the application of neem leaf extract can be used to control unwanted organisms in ponds as environment friendly material instead of deleterious pesticides. Also, extensive investigations should be established for the suitable methods of application in aquatic animal production facilities to be fully explored in future.",
"title": ""
},
{
"docid": "9eb0976833a48b7667a459d967b566eb",
"text": "A comprehensive scheme is described to construct rational trivariate solid T-splines from boundary triangulations with arbitrary topology. To extract the topology of the input geometry, we first compute a smooth harmonic scalar field defined over the mesh, and saddle points are extracted to determine the topology. By dealing with the saddle points, a polycube whose topology is equivalent to the input geometry is built, and it serves as the parametric domain for the trivariate T-spline. A polycube mapping is then used to build a one-to-one correspondence between the input triangulation and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement and applying templates to handle extraordinary nodes and partial extraordinary nodes. The T-spline that is obtained is C2-continuous everywhere over the boundary surface except for the local region surrounding polycube corner nodes. The efficiency and robustness of the presented technique are demonstrated with several applications in isogeometric analysis. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2b8311fa53968e7d7b6db90d81c35d4e",
"text": "Maintaining healthy blood glucose concentration levels is advantageous for the prevention of diabetes and obesity. Present day technologies limit such monitoring to patients who already have diabetes. The purpose of this project is to suggest a non-invasive method for measuring blood glucose concentration levels. Such a method would provide useful for even people without illness, addressing preventive care. This project implements near-infrared light of wavelengths 1450nm and 2050nm through the use of light emitting diodes and measures transmittance through solutions of distilled water and d-glucose of concentrations 50mg/dL, 100mg/dL, 150mg/dL, and 200mg/dL by using an InGaAs photodiode. Regression analysis is done. Transmittance results were observed when using near-infrared light of wavelength 1450nm. As glucose concentration increases, output voltage from the photodiode also increases. The relation observed was linear. No significant transmittance results were obtained with the use of 2050nm infrared light due to high absorbance and low power. The use of 1450nm infrared light provides a means of measuring glucose concentration levels.",
"title": ""
},
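The glucose-sensing abstract above reports a linear relation between photodiode output voltage at 1450 nm and glucose concentration, established through regression analysis. A minimal least-squares sketch of that kind of fit follows; the voltage readings are invented placeholders, not measurements from the study.

```python
# Hedged sketch of the regression step described above: fit photodiode output
# voltage against glucose concentration, then invert the fit to predict
# concentration from a new reading. All numbers are made-up placeholders.
import numpy as np

conc = np.array([50.0, 100.0, 150.0, 200.0])   # glucose concentration, mg/dL
volt = np.array([1.02, 1.10, 1.19, 1.27])      # photodiode output, V (assumed)

slope, intercept = np.polyfit(conc, volt, 1)   # least-squares line V = a*C + b
print(f"V ~ {slope:.4f} * C + {intercept:.3f}")

v_new = 1.15                                   # a new voltage reading
print("predicted concentration:", (v_new - intercept) / slope, "mg/dL")
```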
{
"docid": "5158b5da8a561799402cb1ef3baa3390",
"text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.",
"title": ""
},
{
"docid": "8123ab525ce663e44b104db2cacd59a9",
"text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.",
"title": ""
},
{
"docid": "8c301956112a9bfb087ae9921d80134a",
"text": "This paper presents an operation analysis of a high frequency three-level (TL) PWM inverter applied for an induction heating applications. The feature of TL inverter is to achieve zero-voltage switching (ZVS) at above the resonant frequency. The circuit has been modified from the full-bridge inverter to reach high-voltage with low-harmonic output. The device voltage stresses are controlled in a half of the DC input voltage. The prototype operated between 70 and 78 kHz at the DC voltage rating of 580 V can supply the output power rating up to 3000 W. The iron has been heated and hardened at the temperature up to 800degC. In addition, the experiments have been successfully tested and compared with the simulations",
"title": ""
},
{
"docid": "fa012857ec951bf6365559ab734e9367",
"text": "The aim of this study is to examine the teachers’ attitudes toward the inclusion of students with special educational needs, in public schools and how these attitudes are influenced by their self-efficacy perceptions. The sample is comprised of 416 preschool, primary and secondary education teachers. The results show that, in general, teachers develop positive attitude toward the inclusive education. Higher self-efficacy was associated rather with their capacity to come up against negative experiences at school, than with their attitude toward disabled learners in the classroom and their ability to meet successfully the special educational needs students. The results are consistent with similar studies and reveal the need of establishing collaborative support networks in school districts and the development of teacher education programs, in order to achieve the enrichment of their knowledge and skills to address diverse needs appropriately.",
"title": ""
}
] |
scidocsrr
|
5e5552ff46ca6c7780fc24a8184d18f1
|
Deep reinforcement learning with successor features for navigation across similar environments
|
[
{
"docid": "18b744209b3918d6636a87feed2597c6",
"text": "Robot learning is critically enabled by the availability of appropriate state representations. We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of taskirrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.",
"title": ""
},
{
"docid": "152d2dc6a96621ee6beb29ce472c6bb5",
"text": "Value functions are a core component of reinforcement learning systems. The main idea is to to construct a single function approximator V (s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V (s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.",
"title": ""
}
] |
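The UVFA abstract above factors the value estimate V(s, g; θ) into separate state and goal embeddings. A minimal PyTorch sketch of that factorization follows; the network sizes, the dot-product readout and the toy training targets are illustrative assumptions, not the authors' architecture.

```python
# Illustrative UVFA sketch: V(s, g) approximated as the inner product of a
# learned state embedding and a learned goal embedding. Sizes and the toy
# regression targets are placeholders, not the paper's setup.
import torch
import torch.nn as nn

class UVFA(nn.Module):
    def __init__(self, n_states, n_goals, embed_dim=16):
        super().__init__()
        self.state_embed = nn.Embedding(n_states, embed_dim)
        self.goal_embed = nn.Embedding(n_goals, embed_dim)

    def forward(self, s, g):
        # Factored value estimate: phi(s) . psi(g)
        return (self.state_embed(s) * self.goal_embed(g)).sum(dim=-1)

model = UVFA(n_states=100, n_goals=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

s = torch.randint(0, 100, (32,))        # sampled states
g = torch.randint(0, 10, (32,))         # sampled goals
target = torch.randn(32)                # placeholder observed returns

loss = nn.functional.mse_loss(model(s, g), target)
opt.zero_grad()
loss.backward()
opt.step()
```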
[
{
"docid": "5b0a1e4752c67b002ce16395640dbc1a",
"text": "Cut-and-Paste Text Summarization",
"title": ""
},
{
"docid": "fb632ba10b308128c6f60e2b48adfe59",
"text": "Vehicle Make and Model Recognition (VMMR) has evolved into a significant subject of study due to its importance in numerous Intelligent Transportation Systems (ITS) and corresponding components such as Automated Vehicular Surveillance (AVS). A highly accurate and real-time VMMR system significantly reduces the overhead cost of resources otherwise required. The VMMR problem is a multiclass classification task with a peculiar set of issues and challenges like multiplicity, inter- and intra-make ambiguity among various vehicle makes and models, which need to be solved in an efficient and reliable manner to achieve a highly robust VMMR system.,,,,,, In this paper, facing the growing importance of make and model recognition of vehicles, we present an image dataset1 with 9; 170 different classes of vehicles to advance the corresponding tasks. Extensive experiments conducted using baseline approaches yield superior results for images that were occluded, under low illumination, partial or nonfrontal camera views, available in our VMMR dataset. The approaches presented herewith provide a robust VMMR system for applications in realistic environments.",
"title": ""
},
{
"docid": "8d9f65aadba86c29cb19cd9e6eecec5a",
"text": "To achieve privacy requirements, IoT application providers may need to spend a lot of money to replace existing IoT devices. To address this problem, this study proposes the Blockchain Connected Gateways (BC Gateways) to protect users from providing personal data to IoT devices without user consent. In addition, the gateways store user privacy preferences on IoT devices in the blockchain network. Therefore, this study can utilize the blockchain technology to resolve the disputes of privacy issues. In conclusion, this paper can contribute to improving user privacy and trust in IoT applications with legacy IoT devices.",
"title": ""
},
{
"docid": "bc7340bd66b8192e0276e774a9b6b9d2",
"text": "Predicting facial attributes from faces in the wild is very challenging due to pose and lighting variations in the real world. The key to this problem is to build proper feature representations to cope with these unfavourable conditions. Given the success of Convolutional Neural Network (CNN) in image classification, the high-level CNN feature, as an intuitive and reasonable choice, has been widely utilized for this problem. In this paper, however, we consider the mid-level CNN features as an alternative to the high-level ones for attribute prediction. This is based on the observation that face attributes are different: some of them are locally oriented while others are globally defined. Our investigations reveal that the mid-level deep representations outperform the prediction accuracy achieved by the (fine-tuned) high-level abstractions. We empirically demonstrate that the mid-level representations achieve state-of-the-art prediction performance on CelebA and LFWA datasets. Our investigations also show that by utilizing the mid-level representations one can employ a single deep network to achieve both face recognition and attribute prediction.",
"title": ""
},
{
"docid": "3de4922096e2d9bf04ba1ea89b3b3ff1",
"text": "Events of various sorts make up an important subset of the entities relevant not only in knowledge representation but also in natural language processing and numerous other fields and tasks. How to represent these in a homogeneous yet expressive, extensive, and extensible way remains a challenge. In this paper, we propose an approach based on FrameBase, a broad RDFS-based schema consisting of frames and roles. The concept of a frame, which is a very general one, can be considered as subsuming existing definitions of events. This ensures a broad coverage and a uniform representation of various kinds of events, thus bearing the potential to serve as a unified event model. We show how FrameBase can represent events from several different sources and domains. These include events from a specific taxonomy related to organized crime, events captured using schema.org, and events from DBpedia.",
"title": ""
},
{
"docid": "caad87e49a39569d3af1fe646bd0bde2",
"text": "Over the last years, a variety of pervasive games was developed. Although some of these applications were quite successful in bringing digital games back to the real world, very little is known about their successful integration into smart environments. When developing video games, developers can make use of a broad variety of heuristics. Using these heuristics to guide the development process of applications for intelligent environments could significantly increase their functional quality. This paper addresses the question, whether existing heuristics can be used by pervasive game developers, or if specific design guidelines for smart home environments are required. In order to give an answer, the transferability of video game heuristics was evaluated in a two-step process. In a first step, a set of validated heuristics was analyzed to identify platform-dependent elements. In a second step, the transferability of those elements was assessed in a focus group study.",
"title": ""
},
{
"docid": "85c124fd317dc7c2e5999259d26aa1db",
"text": "This paper presents a method for extracting rotation-invariant features from images of handwriting samples that can be used to perform writer identification. The proposed features are based on the Hinge feature [1], but incorporating the derivative between several points along the ink contours. Finally, we concatenate the proposed features into one feature vector to characterize the writing styles of the given handwritten text. The proposed method has been evaluated using Fire maker and IAM datasets in writer identification, showing promising performance gains.",
"title": ""
},
{
"docid": "e72ed2b388577122402831d4cd75aa0f",
"text": "Development and testing of a compact 200-kV, 10-kJ/s industrial-grade power supply for capacitor charging applications is described. Pulse repetition rate (PRR) can be from single shot to 250 Hz, depending on the storage capacitance. Energy dosing (ED) topology enables high efficiency at switching frequency of up to 55 kHz using standard slow IGBTs. Circuit simulation examples are given. They clearly show zero-current switching at variable frequency during the charge set by the ED governing equations. Peak power drawn from the primary source is about only 60% higher than the average power, which lowers the stress on the input rectifier. Insulation design was assisted by electrostatic field analyses. Field plots of the main transformer insulation illustrate field distribution and stresses in it. Subsystem and system tests were performed including limited insulation life test. A precision, high-impedance, fast HV divider was developed for measuring voltages up to 250 kV with risetime down to 10 μs. The charger was successfully tested with stored energy of up to 550 J at discharge via a custom designed open-air spark gap at PRR up to 20 Hz (in bursts). Future work will include testing at customer sites.",
"title": ""
},
{
"docid": "5ce31924dabd93d4f5770dbfc2fa3c9a",
"text": "As the number of seismic sensors grows, it is becoming increasingly difficult for analysts to pick seismic phases manually and comprehensively, yet such efforts are fundamental to earthquake monitoring. Despite years of improvements in automatic phase picking, it is difficult to match the performance of experienced analysts. A more subtle issue is that different seismic analysts may pick phases differently, which can introduce bias into earthquake locations. We present a deep-neural-network-based arrival-time picking method called\"PhaseNet\"that picks the arrival times of both P and S waves. Deep neural networks have recently made rapid progress in feature learning, and with sufficient training, have achieved super-human performance in many applications. PhaseNet uses three-component seismic waveforms as input and generates probability distributions of P arrivals, S arrivals, and noise as output. We engineer PhaseNet such that peaks in probability provide accurate arrival times for both P and S waves, and have the potential to increase the number of S-wave observations dramatically over what is currently available. This will enable both improved locations and improved shear wave velocity models. PhaseNet is trained on the prodigious available data set provided by analyst-labeled P and S arrival times from the Northern California Earthquake Data Center. The dataset we use contains more than seven million waveform samples extracted from over thirty years of earthquake recordings. We demonstrate that PhaseNet achieves much higher picking accuracy and recall rate than existing methods.",
"title": ""
},
{
"docid": "5300e9938a545895c8b97fe6c9d06aa5",
"text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.",
"title": ""
},
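The abstract above describes an adaptive per-pixel Gaussian mixture background model with recursive parameter updates and automatic selection of the number of components. OpenCV's MOG2 background subtractor implements an algorithm of this family; the short usage sketch below is illustrative only, and the video file name is a hypothetical placeholder.

```python
# Hedged usage sketch: adaptive Gaussian-mixture background subtraction via
# OpenCV's MOG2 implementation (same family of methods as in the abstract).
# "traffic.mp4" is a placeholder path, not data from the paper.
import cv2

cap = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)     # recursive per-pixel mixture update
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(30) & 0xFF == 27:   # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```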
{
"docid": "ab85854fab566b49dd07ee9c9a9cf990",
"text": "A traveling-wave circularly-polarized microstrip array antenna is presented in this paper. It uses a circularly polarized dual-feed radiating element. The element is a rectangular patch with two chamfered corners. It is fed by microstrip lines, making it possible for the radiating element and feed lines to be realized and integrated in a single layer. A four-element array is designed, built and tested. Measured performance of the antenna is presented, where a good agreement between the simulated and measured results is obtained and demonstrated.",
"title": ""
},
{
"docid": "383c4eac985c9f3ace5369c3f823d6bd",
"text": "An artificial-intelligence system uses machine learning from massive training sets to teach itself to play 49 classic computer games, demonstrating that it can adapt to a variety of tasks. See Letter p.529 I mprovements in our ability to process large amounts of data have led to progress in many areas of science, not least artificial intelligence (AI). With advances in machine learning has come the development of machines that can learn intelligent behaviour directly from data, rather than being explicitly programmed to exhibit such behaviour. For instance, the advent of 'big data' has resulted in systems that can recognize objects or sounds with considerable precision. On page 529 of this issue, Mnih et al. 1 describe an agent that uses large data sets to teach itself how to play 49 classic Atari 2600 computer games by looking at the pixels and learning actions that increase the game score. It beat a professional games player in many instances — a remarkable example of the progress being made in AI. In machine learning, systems are trained to infer patterns from observational data. A particularly simple type of pattern, a mapping between input and output, can be learnt through a process called supervised learning. A supervised-learning system is given training data consisting of example inputs and the corresponding outputs, and comes up with a model to explain those data (a process called function approximation). It does this by choosing from a class of model specified by the system's designer. Designing this class is an art: its size and complexity should reflect the amount of training data available, and its content should reflect 'prior knowledge' that the designer of the system considers useful for the problem at hand. If all this is done well, the inferred model will then apply not only for the training set, but also for other data that adhere to the same underlying pattern. The rapid growth of data sets means that machine learning can now use complex model classes and tackle highly non-trivial inference problems. Such problems are usually characterized by several factors: the data are multi dimensional; the underlying pattern is complex (for instance, it might be nonlinear or changeable); and the designer has only weak prior knowledge about the problem — in particular , a mechanistic understanding is lacking. The human brain repeatedly solves non-trivial inference problems as we go about our daily lives, interpreting high-dimensional sensory …",
"title": ""
},
{
"docid": "f84c399ff746a8721640e115fd20745e",
"text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
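The abstract above concerns exact algorithms for the maximum weight clique problem. The sketch below is a generic branch-and-bound search with a simple weight-sum bound, included only to illustrate the problem; it is not Östergård's algorithm nor the authors' improved method.

```python
# Generic exact maximum-weight-clique search with a weight-sum bound
# (illustrative only; not the specific algorithm proposed in the paper).
# adj[v] is the set of neighbours of vertex v, w[v] its weight.
def max_weight_clique(adj, w):
    best = {"weight": 0, "clique": []}

    def expand(clique, cand, weight):
        if weight + sum(w[v] for v in cand) <= best["weight"]:
            return                                    # bound: cannot improve
        if not cand:
            best["weight"], best["clique"] = weight, clique[:]
            return
        for v in sorted(cand, key=lambda u: -w[u]):
            expand(clique + [v], cand & adj[v], weight + w[v])
            cand = cand - {v}                         # do not branch on v again

    expand([], set(range(len(w))), 0)
    return best["weight"], best["clique"]

# Tiny example: a triangle {0, 1, 2} plus a heavy pendant vertex 3 attached to 2.
adj = [{1, 2}, {0, 2}, {0, 1, 3}, {2}]
w = [2, 3, 4, 10]
print(max_weight_clique(adj, w))   # -> (14, [3, 2]): {2, 3} outweighs the triangle
```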
{
"docid": "338af8ad05468f3205c0078d56f5bd74",
"text": "Once a color image is converted to grayscale, it is a common belief that the original color cannot be fully restored, even with the state-of-the-art colorization methods. In this paper, we propose an innovative method to synthesize invertible grayscale. It is a grayscale image that can fully restore its original color. The key idea here is to encode the original color information into the synthesized grayscale, in a way that users cannot recognize any anomalies. We propose to learn and embed the color-encoding scheme via a convolutional neural network (CNN). It consists of an encoding network to convert a color image to grayscale, and a decoding network to invert the grayscale to color. We then design a loss function to ensure the trained network possesses three required properties: (a) color invertibility, (b) grayscale conformity, and (c) resistance to quantization error. We have conducted intensive quantitative experiments and user studies over a large amount of color images to validate the proposed method. Regardless of the genre and content of the color input, convincing results are obtained in all cases.",
"title": ""
},
{
"docid": "54ef290e7c8fbc5c1bcd459df9bc4a06",
"text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.",
"title": ""
},
{
"docid": "dca8b7f7022a139fc14bddd1af2fea49",
"text": "In this study, we investigated the discrimination power of short-term heart rate variability (HRV) for discriminating normal subjects versus chronic heart failure (CHF) patients. We analyzed 1914.40 h of ECG of 83 patients of which 54 are normal and 29 are suffering from CHF with New York Heart Association (NYHA) classification I, II, and III, extracted by public databases. Following guidelines, we performed time and frequency analysis in order to measure HRV features. To assess the discrimination power of HRV features, we designed a classifier based on the classification and regression tree (CART) method, which is a nonparametric statistical technique, strongly effective on nonnormal medical data mining. The best subset of features for subject classification includes square root of the mean of the sum of the squares of differences between adjacent NN intervals (RMSSD), total power, high-frequencies power, and the ratio between low- and high-frequencies power (LF/HF). The classifier we developed achieved sensitivity and specificity values of 79.3% and 100 %, respectively. Moreover, we demonstrated that it is possible to achieve sensitivity and specificity of 89.7% and 100 %, respectively, by introducing two nonstandard features ΔAVNN and ΔLF/HF, which account, respectively, for variation over the 24 h of the average of consecutive normal intervals (AVNN) and LF/HF. Our results are comparable with other similar studies, but the method we used is particularly valuable because it allows a fully human-understandable description of classification procedures, in terms of intelligible “if ... then ...” rules.",
"title": ""
},
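The HRV study above relies on standard time-domain features such as AVNN and RMSSD. The snippet below shows how those two features are conventionally computed from a series of NN intervals; the interval values are synthetic placeholders, not data from the study.

```python
# Conventional time-domain HRV features used in the study above:
# AVNN (mean NN interval) and RMSSD (root mean square of successive differences).
# The NN-interval values are synthetic placeholders.
import numpy as np

nn_ms = np.array([812, 790, 805, 830, 825, 810, 795], dtype=float)  # NN intervals, ms

avnn = nn_ms.mean()
rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))

print(f"AVNN  = {avnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
```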
{
"docid": "96763245ab037e57abb3546aa12bc4fb",
"text": "This paper seeks understanding the user behavior in a social network created essentially by video interactions. We present a characterization of a social network created by the video interactions among users on YouTube, a popular social networking video sharing system. Our results uncover typical user behavioral patterns as well as show evidences of anti-social behavior such as self-promotion and other types of content pollution.",
"title": ""
},
{
"docid": "094e09f2d7d7ce91b9bbf30f31825eb3",
"text": "・ This leads to the problem of structured matching of regions and phrases: (1) individual regions agree with their corresponding phrases. (2) visual relations among regions agree with textual relations among corresponding phrases. ・ For the task of phrase localization, we propose a structured matching of phrases and regions that encourages the semantic relations between phrases to agree with the visual relations between regions.",
"title": ""
},
{
"docid": "a33aa33a2ae6efe5ca43948e8ef3043e",
"text": "In this paper, we describe COCA -- Computation Offload to Clouds using AOP (aspect-oriented programming). COCA is a programming framework that allows smart phones application developers to offload part of the computation to servers in the cloud easily. COCA works at the source level. By harnessing the power of AOP, \\name inserts appropriate offloading code into the source code of the target application based on the result of static and dynamic profiling. As a proof of concept, we integrate \\name into the Android development environment and fully automate the new build process, making application programming and software maintenance easier. With COCA, mobile applications can now automatically offload part of the computation to the cloud, achieving better performance and longer battery life. Smart phones such as iPhone and Android phones can now easily leverage the immense computing power of the cloud to achieve tasks that were considered difficult before, such as having a more complicated artificial-intelligence engine.",
"title": ""
}
] |
scidocsrr
|
5fe5e1144036bf809e53d6d44cabf7c0
|
Coping and Well-Being in Parents of Children with Autism Spectrum Disorders (ASD).
|
[
{
"docid": "ff27d6a0bb65b7640ca1dbe03abc4652",
"text": "The psychometric properties of the Depression Anxiety Stress Scales (DASS) were evaluated in a normal sample of N = 717 who were also administered the Beck Depression Inventory (BDI) and the Beck Anxiety Inventory (BAI). The DASS was shown to possess satisfactory psychometric properties, and the factor structure was substantiated both by exploratory and confirmatory factor analysis. In comparison to the BDI and BAI, the DASS scales showed greater separation in factor loadings. The DASS Anxiety scale correlated 0.81 with the BAI, and the DASS Depression scale correlated 0.74 with the BDI. Factor analyses suggested that the BDI differs from the DASS Depression scale primarily in that the BDI includes items such as weight loss, insomnia, somatic preoccupation and irritability, which fail to discriminate between depression and other affective states. The factor structure of the combined BDI and BAI items was virtually identical to that reported by Beck for a sample of diagnosed depressed and anxious patients, supporting the view that these clinical states are more severe expressions of the same states that may be discerned in normals. Implications of the results for the conceptualisation of depression, anxiety and tension/stress are considered, and the utility of the DASS scales in discriminating between these constructs is discussed.",
"title": ""
},
{
"docid": "c649d226448782ee972c620bea3e0ea3",
"text": "Parents of children with developmental disabilities, particularly autism spectrum disorders (ASDs), are at risk for high levels of distress. The factors contributing to this are unclear. This study investigated how child characteristics influence maternal parenting stress and psychological distress. Participants consisted of mothers and developmental-age matched preschool-aged children with ASD (N = 51) and developmental delay without autism (DD) ( N = 22). Evidence for higher levels of parenting stress and psychological distress was found in mothers in the ASD group compared to the DD group. Children's problem behavior was associated with increased parenting stress and psychological distress in mothers in the ASD and DD groups. This relationship was stronger in the DD group. Daily living skills were not related to parenting stress or psychological distress. Results suggest clinical services aiming to support parents should include a focus on reducing problem behaviors in children with developmental disabilities.",
"title": ""
},
{
"docid": "51be236c79d1af7a2aff62a8049fba34",
"text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.",
"title": ""
}
] |
[
{
"docid": "9ec39badc92094783fcaaa28c2eb2f7a",
"text": "In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed.",
"title": ""
},
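The abstract above builds on Goldberg's nondominated sorting idea. As a minimal illustration of that building block, the function below splits a set of objective vectors into successive Pareto fronts (assuming minimization); the niche/speciation machinery and the GA itself are not shown.

```python
# Minimal nondominated sorting (minimization): peel off successive Pareto fronts
# from a list of objective vectors. Illustrative building block only; the niche
# formation and GA machinery from the paper are not included.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

objectives = [(1, 5), (2, 2), (3, 1), (4, 4), (5, 5)]
print(nondominated_sort(objectives))   # -> [[0, 1, 2], [3], [4]]
```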
{
"docid": "6b410b123925efb0dae519ab8455cc75",
"text": "Attributes, or semantic features, have gained popularity in the past few years in domains ranging from activity recognition in video to face verification. Improving the accuracy of attribute classifiers is an important first step in any application which uses these attributes. In most works to date, attributes have been considered to be independent. However, we know this not to be the case. Many attributes are very strongly related, such as heavy makeup and wearing lipstick. We propose to take advantage of attribute relationships in three ways: by using a multi-task deep convolutional neural network (MCNN) sharing the lowest layers amongst all attributes, sharing the higher layers for related attributes, and by building an auxiliary network on top of the MCNN which utilizes the scores from all attributes to improve the final classification of each attribute. We demonstrate the effectiveness of our method by producing results on two challenging publicly available datasets.",
"title": ""
},
{
"docid": "9b37cc1d96d9a24e500c572fa2cb339a",
"text": "Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages.",
"title": ""
},
{
"docid": "fc8850669cc3f6f2dd1baaf2d2792506",
"text": "Liver segmentation is still a challenging task in medical image processing area due to the complexity of the liver's anatomy, low contrast with adjacent organs, and presence of pathologies. This investigation was used to develop and validate an automated method to segment livers in CT images. The proposed framework consists of three steps: 1) preprocessing; 2) initialization; and 3) segmentation. In the first step, a statistical shape model is constructed based on the principal component analysis and the input image is smoothed using curvature anisotropic diffusion filtering. In the second step, the mean shape model is moved using thresholding and Euclidean distance transformation to obtain a coarse position in a test image, and then the initial mesh is locally and iteratively deformed to the coarse boundary, which is constrained to stay close to a subspace of shapes describing the anatomical variability. Finally, in order to accurately detect the liver surface, deformable graph cut was proposed, which effectively integrates the properties and inter-relationship of the input images and initialized surface. The proposed method was evaluated on 50 CT scan images, which are publicly available in two databases Sliver07 and 3Dircadb. The experimental results showed that the proposed method was effective and accurate for detection of the liver surface.",
"title": ""
},
{
"docid": "027eb9b0a8720451d45f144b45de7810",
"text": "BACKGROUND\nThe blood supply of the lateral supramalleolar flap (LSMF) generally comes from the perforating branch of the peroneal artery. However, the cutaneous branch may also receive blood from the anterior tibial artery. The main objective of the present study was to clarify the vascular anatomy of the LSMF.\n\n\nMETHODS\nAnatomical dissections were performed on 28 perfused fresh cadaver legs. The cutaneous branches of LSMF were identified, and the anatomic relationship between the cutaneous branches and the peroneal and anterior tibial arteries was analyzed.\n\n\nRESULTS\nThe vascular supply for LSMF was divided into 2 main types. A collateral inferolateral branch from the anterior tibial artery anastomosed with the perforating branch of the peroneal artery around the inferior tibiofibular angle, and the main cutaneous branch of the flap arose from this arterial anastomosis in 20 of 28 limbs (71.4%). The collateral inferolateral branch was absent or very small in the other 8 of 28 dissections (28.6%), and the cutaneous branches solely arose from the perforating branch of the peroneal artery. The anastomosis of the descending branch of the peroneal artery and anterior lateral malleolar artery was always (100%) found around the tibiotalar joint.\n\n\nCONCLUSIONS\nIn addition to the perforating branch of the peroneal artery, the LSMF may also receive blood from the anterior tibial artery through the collateral inferolateral branch. New modified proximally based flaps could be designed, and caution is warranted for these variations when a distally based flap is performed.",
"title": ""
},
{
"docid": "94919a7ba43e986e3519d658bba03811",
"text": "We propose a single image dehazing method that is based on a physical model and the dark channel prior principle. The selection of an atmospheric light value is directly responsible for the color authenticity and contrast of the resulting image. Our choice of atmospheric light is based on a variogram, which slowly weakens areas in the image that do not conform to the dark channel prior. Additionally, we propose a fast transmission estimation algorithm to shorten the processing time. Along with a subjective evaluation, the image quality was also evaluated using three indicators: MSE, PSNR, and average gradient. Our experimental results show that the proposed method can obtain accurate dehazing results and improve the operational efficiency. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
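The dehazing abstract above rests on the dark channel prior. The sketch below shows the standard dark channel computation and a simple atmospheric light estimate from the brightest dark-channel pixels; the paper's variogram-based selection and fast transmission estimation are not reproduced.

```python
# Standard dark-channel-prior ingredients (illustrative): per-pixel minimum over
# colour channels followed by a local minimum filter, plus a simple atmospheric
# light estimate. The paper's variogram-based refinement is not reproduced here.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: float RGB array in [0, 1] with shape (H, W, 3)."""
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark, top_frac=0.001):
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]            # brightest dark-channel pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)

hazy = np.random.rand(240, 320, 3)                 # placeholder for a hazy image
dc = dark_channel(hazy)
A = atmospheric_light(hazy, dc)
print("estimated atmospheric light (RGB):", A)
```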
{
"docid": "de53086ad6d2f3a2c69aa37dde35bee7",
"text": "Towards the integration of rules and ontologies in the Semantic Web, we propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN (D), which underly the Web ontology languages OWL Lite and OWL DL, respectively. This combination allows for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We introduce description logic programs (dl-programs), which consist of a description logic knowledge base L and a finite set of description logic rules (dl-rules) P . Such rules are similar to usual rules in logic programs with negation as failure, but may also contain queries to L, possibly default-negated, in their bodies. We define Herbrand models for dl-programs, and show that satisfiable positive dl-programs have a unique least Herbrand model. More generally, consistent stratified dl-programs can be associated with a unique minimal Herbrand model that is characterized through iterative least Herbrand models. We then generalize the (unique) minimal Herbrand model semantics for positive and stratified dl-programs to a strong answer set semantics for all dl-programs, which is based on a reduction to the least model semantics of positive dl-programs. We also define a weak answer set semantics based on a reduction to the answer sets of ordinary logic programs. Strong answer sets are weak answer sets, and both properly generalize answer sets of ordinary normal logic programs. We then give fixpoint characterizations for the (unique) minimal Herbrand model semantics of positive and stratified dl-programs, and show how to compute these models by finite fixpoint iterations. Furthermore, we give a precise picture of the complexity of deciding strong and weak answer set existence for a dl-program. 1Institut für Informationssysteme, Technische Universität Wien, Favoritenstraße 9-11, A-1040 Vienna, Austria; e-mail: {eiter, lukasiewicz, roman, tompits}@kr.tuwien.ac.at. 2Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”, Via Salaria 113, I-00198 Rome, Italy; e-mail: lukasiewicz@dis.uniroma1.it. Acknowledgements: This work has been partially supported by the Austrian Science Fund project Z29N04 and a Marie Curie Individual Fellowship of the European Community programme “Human Potential” under contract number HPMF-CT-2001-001286 (disclaimer: The authors are solely responsible for information communicated and the European Commission is not responsible for any views or results expressed). We would like to thank Ian Horrocks and Ulrike Sattler for providing valuable information on complexityrelated issues during the preparation of this paper. Copyright c © 2004 by the authors INFSYS RR 1843-03-13 I",
"title": ""
},
{
"docid": "032423abdc2ffa2a8a84ddf6e6bbde2a",
"text": "European Journal of Information Systems (2008) 17, 441–443. doi:10.1057/ejis.2008.45 Your new issue of EJIS opens with two opinion articles, each of which is a response to one of our previous opinion articles. First, I have invited Ray Paul to reply to the opinion article in the last number written by Bob Galliers. Second, we have a deeply analytical article written by Steven Alter in response to Ray Paul’s 2007 editorial ‘Challenges to information systems: time to change’. This issue is our only special issue for 2008, and is dedicated to Design Science Research. Most of the articles came to us from among the best papers presented at the 2008 DESRIST conference (Design Science Research in Information Systems and Technology). The special issue is introduced by Robert Winter in his article ‘Design Science Research in Europe’. As this introduction explains, the community within information systems with an interest in design science research is engaged in a discourse of discovery. However, it cannot be said that there is yet broad agreement on terminology, methodology, evaluation criteria, etc. This discovery arena encompasses terms like design science, design research, science of design and design theory. Because there is some disagreement over what these terms mean, the conference (and as a consequence the special issue) adopted the term ‘Design Science Research’ as a broad term meant to encompass the various meanings of all of the others. (For the purposes at hand, I will use the term design science as shorthand for design science research.) This fundamental discourse is reminiscent of the long-running search for the meaning of the term ‘theory’. When it became clear that there would never be complete agreement among management scholars on exactly what sorts of things constitute theory, Robert Sutton and Barry Shaw suggested that since agreement seemed impossible on ‘what theory is’, the best that could be done would be to seek agreement on ‘what theory is not’ (Sutton & Staw, 1995). Perhaps those interested in design science should seek a similar model and agree on ‘what design science is not’.",
"title": ""
},
{
"docid": "bade68b8f95fc0ae5a377a52c8b04b5c",
"text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future",
"title": ""
},
{
"docid": "d5f905fb66ba81ecde0239a4cc3bfe3f",
"text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.",
"title": ""
},
{
"docid": "5e4c4a9f298a2eb015ce96fa2c82c2c2",
"text": "Tendons are able to respond to mechanical forces by altering their structure, composition, and mechanical properties--a process called tissue mechanical adaptation. The fact that mechanical adaptation is effected by cells in tendons is clearly understood; however, how cells sense mechanical forces and convert them into biochemical signals that ultimately lead to tendon adaptive physiological or pathological changes is not well understood. Mechanobiology is an interdisciplinary study that can enhance our understanding of mechanotransduction mechanisms at the tissue, cellular, and molecular levels. The purpose of this article is to provide an overview of tendon mechanobiology. The discussion begins with the mechanical forces acting on tendons in vivo, tendon structure and composition, and its mechanical properties. Then the tendon's response to exercise, disuse, and overuse are presented, followed by a discussion of tendon healing and the role of mechanical loading and fibroblast contraction in tissue healing. Next, mechanobiological responses of tendon fibroblasts to repetitive mechanical loading conditions are presented, and major cellular mechanotransduction mechanisms are briefly reviewed. Finally, future research directions in tendon mechanobiology research are discussed.",
"title": ""
},
{
"docid": "1c506714f51329e6ac438a6c5e8bbf20",
"text": "Color fringe causes degradation of image quality when an image is obtained from a digital camera. This paper presents a computationally efficient color fringe correction method using a guided filter. Experimental results with a number of test images show the effectiveness of the proposed method in terms of color fringe removal and the computational load.",
"title": ""
},
{
"docid": "07d8df7d895f0af5e76bd0d5980055fb",
"text": "Debate over euthanasia is not a recent phenomenon. Over the years, public opinion, decisions of courts, and legal and medical approaches to the issue of euthanasia has been conflicting. The connection between murder and euthanasia has been attempted in a few debates. Although it is widely accepted that murder is a crime, a clearly defined stand has not been taken on euthanasia. This article considers euthanasia from the medical, legal, and global perspectives and discusses the crime of murder in relation to euthanasia, taking into consideration the issue of consent in the law of crime. This article concludes that in the midst of this debate on euthanasia and murder, the important thing is that different countries need to find their own solution to the issue of euthanasia rather than trying to import solutions from other countries.",
"title": ""
},
{
"docid": "9d3ca4966c26c6691398157a22531a1d",
"text": "Bipedal locomotion skills are challenging to develop. Control strategies often use local linearization of the dynamics in conjunction with reduced-order abstractions to yield tractable solutions. In these model-based control strategies, the controller is often not fully aware of many details, including torque limits, joint limits, and other non-linearities that are necessarily excluded from the control computations for simplicity. Deep reinforcement learning (DRL) offers a promising model-free approach for controlling bipedal locomotion which can more fully exploit the dynamics. However, current results in the machine learning literature are often based on ad-hoc simulation models that are not based on corresponding hardware. Thus it remains unclear how well DRL will succeed on realizable bipedal robots. In this paper, we demonstrate the effectiveness of DRL using a realistic model of Cassie, a bipedal robot. By formulating a feedback control problem as finding the optimal policy for a Markov Decision Process, we are able to learn robust walking controllers that imitate a reference motion with DRL. Controllers for different walking speeds are learned by imitating simple time-scaled versions of the original reference motion. Controller robustness is demonstrated through several challenging tests, including sensory delay, walking blindly on irregular terrain and unexpected pushes at the pelvis. We also show we can interpolate between individual policies and that robustness can be improved with an interpolated policy.",
"title": ""
},
{
"docid": "2b61c90d330d3bd290c1bcd485ce0129",
"text": "Automatic detection of vehicle alert signals is extremely critical in autonomous vehicle applications and collision avoidance systems, as these detection systems can help in the prevention of deadly and costly accidents. In this paper, we present a novel and lightweight algorithm that uses a Kalman filter and a codebook to achieve a high level of robustness. The algorithm is able to detect braking and turning signals of the vehicle in front both during the daytime and at night (daytime detection being a major advantage over current research), as well as correctly track a vehicle despite changing lanes or encountering periods of no or low-visibility of the vehicle in front. We demonstrate that the proposed algorithm is able to detect the signals accurately and reliably under different lighting conditions.",
"title": ""
},
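The abstract above pairs a Kalman filter with a codebook to track vehicle alert signals. The sketch below is only the tracking half: a constant-velocity Kalman filter for a 2-D light centroid, with made-up noise levels and measurements; the codebook matching stage is omitted.

```python
# Minimal constant-velocity Kalman filter for a 2-D light centroid, as one
# illustrative building block of a tracker like the one described above.
# Noise levels and measurements are invented; the codebook stage is not shown.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is measured
Q = np.eye(4) * 0.01                         # process noise (assumed)
R = np.eye(2) * 4.0                          # measurement noise (assumed)

x = np.zeros(4)
P = np.eye(4) * 100.0
for z in [(120, 80), (123, 82), None, (130, 86)]:   # None = occluded frame
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    if z is not None:                               # update when a detection exists
        y = np.array(z, dtype=float) - H @ x        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    print("estimated centroid:", np.round(x[:2], 1))
```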
{
"docid": "a70f90ce39e1c3fc771412ca87adbad1",
"text": "The concept of death has evolved as technology has progressed. This has forced medicine and society to redefine its ancient cardiorespiratory centred diagnosis to a neurocentric diagnosis of death. The apparent consensus about the definition of death has not yet appeased all controversy. Ethical, moral and religious concerns continue to surface and include a prevailing malaise about possible expansions of the definition of death to encompass the vegetative state or about the feared bias of formulating criteria so as to facilitate organ transplantation.",
"title": ""
},
{
"docid": "956ffd90cc922e77632b8f9f79f42a98",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
},
{
"docid": "80ef125fb855cfb76197e474ec371726",
"text": "An electric arc furnace is a nonlinear, time varying load with stochastic behavior, which gives rise to harmonics, interharminics and voltage flicker. Since a power system has finite impedance, the current distortion caused by a DC electric arc furnace load creates a corresponding voltage distortion in the supply lines. The current and voltage harmonic distortion causes several problems in electrical power system, such as electrical, electronic, and computer equipment damage, control system errors due to electrical noise caused by harmonics, additional losses in transmission and distribution networks and etc. This paper makes an effort to display the differences between two types of DC electric arc furnace feeding system from the viewpoint of total harmonic distortion at AC side. These Types of feeding system include controlled rectifier power supply and uncontrolled rectifier chopper power supply. Simulation results show that the uncontrolled rectifier chopper power supply is more efficient than the other one.",
"title": ""
},
{
"docid": "366800edb32efd098351bc711984854a",
"text": "Building credible Non-Playing Characters (NPCs) in games requires not only to enhance the graphic animation but also the behavioral model. This paper tackles the problem of the dynamics of NPCs social relations depending on their emotional interactions. First, we discuss the need for a dynamic model of social relations. Then, we present our model of social relations for NPCs and we give a qualitative model of the influence of emotions on social relations. We describe the implementation of this model and we briefly illustrate its features on a simple scene.",
"title": ""
},
{
"docid": "5d82469913da465c7445359dcdbbc89b",
"text": "There is increasing interest in using synthetic aperture radar (SAR) images in automated target recognition and decision-making tasks. The success of such tasks depends on how well the reconstructed SAR images exhibit certain features of the underlying scene. Based on the observation that typical underlying scenes usually exhibit sparsity in terms of such features, this paper presents an image formation method that formulates the SAR imaging problem as a sparse signal representation problem. For problems of complex-valued nature, such as SAR, a key challenge is how to choose the dictionary and the representation scheme for effective sparse representation. Since features of the SAR reflectivity magnitude are usually of interest, the approach is designed to sparsely represent the magnitude of the complex-valued scattered field. This turns the image reconstruction problem into a joint optimisation problem over the representation of magnitude and phase of the underlying field reflectivities. The authors develop the mathematical framework for this method and propose an iterative solution for the corresponding joint optimisation problem. The experimental results demonstrate the superiority of this method over previous approaches in terms of both producing high-quality SAR images and exhibiting robustness to uncertain or limited data.",
"title": ""
}
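As a hedged illustration of the sparsity-driven SAR formulation summarized in the abstract above (this is not the authors' algorithm or code), the sketch below solves a simplified instance of that objective, 0.5*||y - A f||^2 + lam*|| |f| ||_1, with a proximal-gradient loop whose shrinkage step reduces the magnitude of each complex coefficient while preserving its phase. The observation operator, regularization weight, and toy data are all invented for illustration.

```python
# Hypothetical sketch (not the authors' code): ISTA-style reconstruction of a
# complex reflectivity f from measurements y = A f + noise, minimizing
#   0.5 * ||y - A f||_2^2 + lam * || |f| ||_1
# The prox of the magnitude-l1 term shrinks |f| and keeps the phase.
import numpy as np

def complex_soft_threshold(z, tau):
    """Shrink the magnitude of each complex entry by tau, preserving its phase."""
    mag = np.abs(z)
    return z * (np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12))

def sparse_sar_reconstruct(y, A, lam=0.5, n_iter=300):
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)      # 1 / Lipschitz constant of the data term
    f = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ f - y)           # gradient of 0.5 * ||y - A f||^2
        f = complex_soft_threshold(f - step * grad, step * lam)
    return f

# Toy usage: a random matrix stands in for the SAR observation operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
f_true = np.zeros(128, dtype=complex)
f_true[[5, 40, 90]] = [2.0 + 1.0j, -1.5j, 3.0]    # a few strong scatterers
y = A @ f_true + 0.01 * rng.standard_normal(64)
f_hat = sparse_sar_reconstruct(y, A, lam=0.5)
```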
] |
scidocsrr
|
8e0e64ef02fb9b8107f3f5ff8903b4ae
|
Recent Advances in Open Set Recognition: A Survey
|
[
{
"docid": "8017a70c73f6758b685648054201342a",
"text": "Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and Image Net. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.",
"title": ""
},
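As a rough, hypothetical sketch of the joint null-space idea described in the abstract above (not the paper's implementation), the snippet below projects data onto directions in which the within-class scatter vanishes, so that all training samples of a class collapse to a single point, and scores novelty as the distance to the nearest class point. It assumes high-dimensional features (more dimensions than training samples), otherwise the null space may be empty; all names and the tolerance are invented.

```python
# Rough sketch of the joint null-space idea (not the paper's implementation).
import numpy as np

class NullSpaceNovelty:
    """Project onto directions of zero within-class scatter, where each known class
    collapses to a single point; novelty = distance to the nearest class point."""

    def fit(self, X, y, tol=1e-8):
        self.mu_ = X.mean(axis=0)
        Xc = X - self.mu_
        _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[s > tol].T                                  # basis of the data span
        Z = Xc @ V
        Sw = np.zeros((Z.shape[1], Z.shape[1]))            # within-class scatter
        for c in np.unique(y):
            Zc = Z[y == c] - Z[y == c].mean(axis=0)
            Sw += Zc.T @ Zc
        evals, evecs = np.linalg.eigh(Sw)
        N = evecs[:, evals < tol]                          # zero intra-class variance
        self.W_ = V @ N                                    # final projection
        self.targets_ = {c: ((X[y == c] - self.mu_) @ self.W_).mean(axis=0)
                         for c in np.unique(y)}
        return self

    def score(self, X):
        """Higher score means more novel."""
        P = (X - self.mu_) @ self.W_
        d = np.stack([np.linalg.norm(P - t, axis=1) for t in self.targets_.values()])
        return d.min(axis=0)

# Toy usage with more feature dimensions than samples (otherwise the null space is empty).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((30, 50)), np.repeat(np.arange(3), 10)
scores = NullSpaceNovelty().fit(X, y).score(rng.standard_normal((5, 50)))
```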
{
"docid": "5531c0b2286cff0d1c738f4d919e4cc4",
"text": "With the of advent rich classification models and high computational power visual recognition systems have found many operational applications. Recognition in the real world poses multiple challenges that are not apparent in controlled lab environments. The datasets are dynamic and novel categories must be continuously detected and then added. At prediction time, a trained system has to deal with myriad unseen categories. Operational systems require minimal downtime, even to learn. To handle these operational issues, we present the problem of Open World Recognition and formally define it. We prove that thresholding sums of monotonically decreasing functions of distances in linearly transformed feature space can balance “open space risk” and empirical risk. Our theory extends existing algorithms for open world recognition. We present a protocol for evaluation of open world recognition systems. We present the Nearest Non-Outlier (NNO) algorithm that evolves model efficiently, adding object categories incrementally while detecting outliers and managing open space risk. We perform experiments on the ImageNet dataset with 1.2M+ images to validate the effectiveness of our method on large scale visual recognition tasks. NNO consistently yields superior results on open world recognition.",
"title": ""
}
] |
[
{
"docid": "f8d0929721ba18b2412ca516ac356004",
"text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.",
"title": ""
},
{
"docid": "9266a615d04b44961d8b8202b30aa27e",
"text": "Mobile communication has become a serious business tool nowadays. Mobile devices are the major platform for the users to transfer and exchange diverse data for communication. These devices are variably used for applications like banking, personal digital assistance, remote working, m-commerce, internet access, entertainment and medical usage. However people are still hesitant to use mobile devices because of its security issue. It is necessary to provide a reliable and easy to use method for securing these mobile devices against unauthorized access and diverse attacks. It is preferred to apply biometrics for the security of mobile devices and improve reliability over wireless services. This paper deals with various threats and vulnerabilities that affect the mobile devices and also it discusses how biometrics can be a solution to the mobile devices ensuring security.",
"title": ""
},
{
"docid": "c04e3a28b6f3f527edae534101232701",
"text": "An intelligent interface for an information retrieval system has the aims of controlling an underlying information retrieval system di rectly interacting with the user and allowing him to retrieve relevant information without the support of a human intermediary Developing intelligent interfaces for information retrieval is a di cult activity and no well established models of the functions that such systems should possess are available Despite of this di culty many intelligent in terfaces for information retrieval have been implemented in the past years This paper surveys these systems with two aims to stand as a useful entry point for the existing literature and to sketch an ana lysis of the functionalities that an intelligent interface for information retrieval has to possess",
"title": ""
},
{
"docid": "9a04006d0328b838b9360a381401e436",
"text": "In this paper, a novel approach for two-loop control of the DC-DC flyback converter in discontinuous conduction mode is presented by using sliding mode controller. The proposed controller can regulate output of the converter in wide range of input voltage and load resistance. In order to verify accuracy and efficiency of the developed sliding mode controller, proposed method is simulated in MATLAB/Simulink. It is shown that the developed controller has faster dynamic response compared with standard integrated circuit (MIC38C42-5) based regulators.",
"title": ""
},
{
"docid": "351c8772471518f305ab0b327632d59d",
"text": "Image classification is one of classical problems of concern in image processing. There are various approaches for solving this problem. The aim of this paper is bring together two areas in which are Artificial Neural Network (ANN) and Support Vector Machine (SVM) applying for image classification. Firstly, we separate the image into many sub-images based on the features of images. Each sub-image is classified into the responsive class by an ANN. Finally, SVM has been compiled all the classify result of ANN. Our proposal classification model has brought together many ANN and one SVM. Let it denote ANN_SVM. ANN_SVM has been applied for Roman numerals recognition application and the precision rate is 86%. The experimental results show the feasibility of our proposal model.",
"title": ""
},
{
"docid": "7e208f65cf33a910cc958ec57bdff262",
"text": "This study proposed to address a new method that could select subsets more efficiently. In addition, the reasons why employers voluntarily turnover were also investigated in order to increase the classification accuracy and to help managers to prevent employers’ turnover. The mixed subset selection used in this study combined Taguchi method and Nearest Neighbor Classification Rules to select subset and analyze the factors to find the best predictor of employer turnover. All the samples used in this study were from industry A, in which the employers left their job during 1st of February, 2001 to 31st of December, 2007, compared with those incumbents. The results showed that through the mixed subset selection method, total 18 factors were found that are important to the employers. In addition, the accuracy of correct selection was 87.85% which was higher than before using this subset selection (80.93%). The new subset selection method addressed in this study does not only provide industries to understand the reasons of employers’ turnover, but also could be a long-term classification prediction for industries. Key-Words: Voluntary Turnover; Subset Selection; Taguchi Methods; Nearest Neighbor Classification Rules; Training pattern",
"title": ""
},
{
"docid": "2cd5e92b5705753d10fc5949936d43ef",
"text": "Traditional flow monitoring provides a high-level view of network communications by reporting the addresses, ports, and byte and packet counts of a flow. This data is valuable, but it gives little insight into the actual content or context of a flow. To obtain this missing insight, we investigated intra-flow data, that is, information about events that occur inside of a flow that can be conveniently collected, stored, and analyzed within a flow monitoring framework. The focus of our work is on new types of data that are independent of protocol details, such as the lengths and arrival times of messages within a flow. These data elements have the attractive property that they apply equally well to both encrypted and unencrypted flows. Protocol-aware telemetry, specifically TLS-aware telemetry, is also analyzed. In this paper, we explore the benefits of enhanced telemetry, desirable properties of new intra-flow data features with respect to a flow monitoring system, and how best to use machine learning classifiers that operate on this data. We provide results on millions of flows processed by our open source program. Finally, we show that leveraging appropriate data features and simple machine learning models can successfully identify threats in encrypted network traffic.",
"title": ""
},
{
"docid": "3fd52b589a58f449ab1c03a19a034a2d",
"text": "This paper presents a low-power high-bit-rate phase modulator based on a digital PLL with single-bit TDC and two-point injection scheme. At high bit rates, this scheme requires a controlled oscillator with wide tuning range and becomes critically sensitive to the delay spread between the two injection paths, considerably degrading the achievable error-vector magnitude and causing significant spectral regrowth. A multi-capacitor-bank oscillator topology with an automatic background regulation of the gains of the banks and a digital adaptive filter for the delay-spread correction are introduced. The phase modulator fabricated in a 65-nm CMOS process synthesizes carriers in the 2.9-to-4.0-GHz range from a 40-MHz crystal reference and it is able to produce a phase change up to ±π with 10-bit resolution in a single reference cycle. Measured EVM at 3.6 GHz is -36 dB for a 10-Mb/s GMSK and a 20-Mb/s QPSK modulation. Power dissipation is 5 mW from a 1.2-V voltage supply, leading to a total energy consumption of 0.25 nJ/bit.",
"title": ""
},
{
"docid": "6bfdd78045816085cd0fa5d8bb91fd18",
"text": "Contextual factors can greatly influence the users' preferences in listening to music. Although it is hard to capture these factors directly, it is possible to see their effects on the sequence of songs liked by the user in his/her current interaction with the system. In this paper, we present a context-aware music recommender system which infers contextual information based on the most recent sequence of songs liked by the user. Our approach mines the top frequent tags for songs from social tagging Web sites and uses topic modeling to determine a set of latent topics for each song, representing different contexts. Using a database of human-compiled playlists, each playlist is mapped into a sequence of topics and frequent sequential patterns are discovered among these topics. These patterns represent frequent sequences of transitions between the latent topics representing contexts. Given a sequence of songs in a user's current interaction, the discovered patterns are used to predict the next topic in the playlist. The predicted topics are then used to post-filter the initial ranking produced by a traditional recommendation algorithm. Our experimental evaluation suggests that our system can help produce better recommendations in comparison to a conventional recommender system based on collaborative or content-based filtering. Furthermore, the topic modeling approach proposed here is also useful in providing better insight into the underlying reasons for song selection and in applications such as playlist construction and context prediction.",
"title": ""
},
{
"docid": "c2ed6ac38a6014db73ba81dd898edb97",
"text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.",
"title": ""
},
{
"docid": "40e9c1a6bef4a8b0c2681b09afc528c9",
"text": "360-Degree panoramic cameras have been widely used in the field of computer vision and virtual reality recently. The use of fisheye lens to actualize a panoramic camera has become the industry trend. Fisheye lens has large distortion, and fisheye images have to be unwarped and blended to get 360-degree panoramic images, which has become two difficulties in fisheye lens practice. In this paper, a set of automatic 360-degree panoramic image generation algorithm which can be easily realized is proposed to solve these difficulties. The result shows that this software method can achieve high quality and low cost.",
"title": ""
},
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "759a44aa610befecc766e7c4cbe19734",
"text": "This survey introduces the current state of the art in image and video retargeting and describes important ideas and technologies that have influenced the recent work. Retargeting is the process of adapting an image or video from one screen resolution to another to fit different displays, for example, when watching a wide screen movie on a normal television screen or a mobile device. As there has been considerable work done in this field already, this survey provides an overview of the techniques. It is meant to be a starting point for new research in the field. We include explanations of basic terms and operators, as well as the basic workflow of the different methods.",
"title": ""
},
{
"docid": "5f344817b225363f5309208909619306",
"text": "Semantic specialization is a process of finetuning pre-trained distributional word vectors using external lexical knowledge (e.g., WordNet) to accentuate a particular semantic relation in the specialized vector space. While post-processing specialization methods are applicable to arbitrary distributional vectors, they are limited to updating only the vectors of words occurring in external lexicons (i.e., seen words), leaving the vectors of all other words unchanged. We propose a novel approach to specializing the full distributional vocabulary. Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space. We exploit words seen in the resources as training examples for learning a global specialization function. This function is learned by combining a standard L2-distance loss with a adversarial loss: the adversarial component produces more realistic output vectors. We show the effectiveness and robustness of the proposed method across three languages and on three tasks: word similarity, dialog state tracking, and lexical simplification. We report consistent improvements over distributional word vectors and vectors specialized by other state-of-the-art specialization frameworks. Finally, we also propose a cross-lingual transfer method for zero-shot specialization which successfully specializes a full target distributional space without any lexical knowledge in the target language and without any bilingual data.",
"title": ""
},
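A minimal, hypothetical sketch of the kind of combined objective the abstract above describes: a generator maps a distributional vector toward its specialized counterpart with an L2 term, while a discriminator pushes the outputs to look like real specialized vectors. This is a generic GAN-style rendering of the idea, not the authors' architecture; the layer sizes, optimizers, and the weight lam are placeholders, and each batch is assumed to contain (distributional, specialized) vector pairs for seen words.

```python
# Generic GAN-style rendering of the combined L2 + adversarial objective described
# above (layer sizes, optimizers and the weight lam are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 300
G = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))   # specialization fn
D = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, 1))     # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(x_distributional, x_specialized, lam=1.0):
    ones = torch.ones(x_specialized.size(0), 1)
    zeros = torch.zeros(x_distributional.size(0), 1)
    # Discriminator: real specialized vectors vs. generated ones.
    fake = G(x_distributional).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(x_specialized), ones) +
              F.binary_cross_entropy_with_logits(D(fake), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: L2 toward the gold specialized vector + fool the discriminator.
    fake = G(x_distributional)
    g_loss = (F.mse_loss(fake, x_specialized) +
              lam * F.binary_cross_entropy_with_logits(D(fake), ones))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```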
{
"docid": "b395aa3ae750ddfd508877c30bae3a38",
"text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.",
"title": ""
},
{
"docid": "323eec69e6cd558ade788070cff58452",
"text": "OBJECTIVE\nTo report clinical signs, diagnostic and surgical or necropsy findings, and outcome in 2 calves with spinal epidural abscess (SEA).\n\n\nSTUDY DESIGN\nClinical report.\n\n\nANIMALS\nCalves (n=2).\n\n\nMETHODS\nCalves had neurologic examination, analysis and antimicrobial culture of cerebrospinal fluid (CSF), vertebral column radiographs, myelography, and in 1 calf, magnetic resonance imaging (MRI). A definitive diagnosis of SEA was confirmed by necropsy in 1 calf and during surgery and histologic examination of vertebral canal tissue in 1 calf.\n\n\nRESULTS\nClinical signs were difficulty in rising, ataxia, fever, apparent spinal pain, hypoesthesia, and paresis/plegia which appeared 15 days before admission. Calf 1 had pelvic limb weakness and difficulty standing and calf 2 had severe ataxia involving both thoracic and pelvic limbs. Extradural spinal cord compression was identified by myelography. SEA suspected in calf 1 with discospondylitis was confirmed at necropsy whereas calf 2 had MRI identification of the lesion and was successfully decompressed by laminectomy and SEA excision. Both calves had peripheral neutrophilia and calf 2 had neutrophilic pleocytosis in CSF. Bacteria were not isolated from CSF, from the surgical site or during necropsy. Calf 2 improved neurologically and had a good long-term outcome.\n\n\nCONCLUSION\nGood outcome in a calf with SEA was obtained after adequate surgical decompression and antibiotic administration.\n\n\nCLINICAL RELEVANCE\nSEA should be included in the list of possible causes of fever, apparent spinal pain, and signs of myelopathy in calves.",
"title": ""
},
{
"docid": "3e6aac2e0ff6099aabeee97dc1292531",
"text": "A lthough ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.",
"title": ""
},
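To make the OLS/RTO distinction in the abstract above concrete, here is a small illustrative sketch (not taken from the note itself): the RTO slope is sum(x*y)/sum(x*x), whereas OLS also estimates an intercept. One standard caveat, hedged here as general statistical background rather than a claim of the note, is that the usual R-squared is not directly comparable between the two model types.

```python
# Illustrative comparison (not from the note itself) of OLS and RTO fits.
import numpy as np

def rto_slope(x, y):
    """Least-squares slope of y ~ b*x with the intercept forced to zero."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(x * y) / np.sum(x * x)

def ols_fit(x, y):
    """Ordinary least squares for y ~ a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    a = y.mean() - b * x.mean()
    return a, b

# Data where a zero intercept is physically sensible (made-up numbers).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
print("RTO slope:", rto_slope(x, y))
print("OLS intercept and slope:", ols_fit(x, y))
```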
{
"docid": "ffde415087f0a7fcd93a2a94c17e196a",
"text": "This paper describes Stanford’s system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input, and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation, to POS tagging and dependency parsing. Our single system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed the 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submission systems on lowresource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.",
"title": ""
},
{
"docid": "c46dd659aa1dfeac9c58197ff8575278",
"text": "Previous studies indicate that childhood sexual abuse can have extensive and serious consequences. The aim of this research was to do a qualitative study of the consequences of childhood sexual abuse for Icelandic men's health and well-being. Phenomenology was the methodological approach of the study. Totally 14 interviews were conducted, two per individual, and analysed based on the Vancouver School of Phenomenology. The main results of the study showed that the men describe deep and almost unbearable suffering, affecting their entire life, of which there is no alleviation in sight. The men have lived in repressed silence most of their lives and have come close to taking their own lives. What stopped them from committing suicide was revealing to others what happened to them which set them free in a way. The men experienced fear- or rage-based shock at the time of the trauma and most of them endured the attack by dissociation, disconnecting psyche and body and have difficulties reconnecting. They had extremely difficult childhoods, living with indisposition, bullying, learning difficulties and behavioural problems. Some have, from a young age, numbed themselves with alcohol and elicit drugs. They have suffered psychologically and physically and have had relational and sexual intimacy problems. The consequences of the abuse surfaced either immediately after the shock or many years later and developed into complex post-traumatic stress disorder. Because of perceived societal prejudice, it was hard for the men to seek help. This shows the great need for professionals to be alert to the possible consequences of childhood sexual abuse in their practice to reverse the damaging consequences on their health and well-being. We conclude that living in repressed silence after a trauma, like childhood sexual abuse, can be dangerous for the health, well-being and indeed the very life of the survivor.",
"title": ""
},
{
"docid": "51e0a26f73fb2cc56286a15c4e15d9cd",
"text": "OBJECTIVE\nTo determine the effectiveness of a water flosser in reducing the bleeding on probing (BOP) index around dental implants as compared to flossing.\n\n\nMETHODS AND MATERIALS\nPatients with implants were randomly assigned to one of two groups in this examiner-masked, single-center study. The study compared the efficacy of a manual toothbrush paired with either traditional string floss or a water flosser.\n\n\nRESULTS\nThe primary outcome was the reduction in the incidence of BOP after 30 days. There were no differences in the percent of bleeding sites between the groups at baseline. At 30 days, 18 of the 22 (81.8%) implants in the water flosser group showed a reduction in BOP compared to 6 of the 18 (33.3%) in the floss group (P=0.0018).\n\n\nCONCLUSIONS\nThese results demonstrate that the water flosser group had statistically significantly greater bleeding reduction than the string floss group. The authors concluded that water flossing may be a useful adjuvant for implant hygiene maintenance.",
"title": ""
}
] |
scidocsrr
|
3f04d4e8dd498cca0956dda98ed70366
|
HeteroMed: Heterogeneous Information Network for Medical Diagnosis
|
[
{
"docid": "8d9a02974ad85aa508dc0f7a85a669f1",
"text": "The successful application of data mining in highly visible fields like e-business, marketing and retail has led to its application in other industries and sectors. Among these sectors just discovering is healthcare. The healthcare environment is still „information rich‟ but „knowledge poor‟. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today‟s medical research particularly in Heart Disease Prediction. Number of experiment has been conducted to compare the performance of predictive data mining technique on the same dataset and the outcome reveals that Decision Tree outperforms and some time Bayesian classification is having similar accuracy as of decision tree but other predictive methods like KNN, Neural Networks, Classification based on clustering are not performing well. The second conclusion is that the accuracy of the Decision Tree and Bayesian Classification further improves after applying genetic algorithm to reduce the actual data size to get the optimal subset of attribute sufficient for heart disease prediction.",
"title": ""
}
] |
[
{
"docid": "f4892cf76edfad23f0726d89ebfa6522",
"text": "Compression models represent an interesting approach for different classification tasks and have been used widely across many research fields. We adapt compression models to the field of authorship verification (AV), a branch of digital text forensics. The task in AV is to verify if a questioned document and a reference document of a known author are written by the same person. We propose an intrinsic AV method, which yields competitive results compared to a number of current state-of-the-art approaches, based on support vector machines or neural networks. However, in contrast to these approaches our method does not make use of machine learning algorithms, natural language processing techniques, feature engineering, hyperparameter optimization or external documents (a common strategy to transform AV from a one-class to a multi-class classification problem). Instead, the only three key components of our method are a compressing algorithm, a dissimilarity measure and a threshold, needed to accept or reject the authorship of the questioned document. Due to its compactness, our method performs very fast and can be reimplemented with minimal effort. In addition, the method can handle complicated AV cases where both, the questioned and the reference document, are not related to each other in terms of topic or genre. We evaluated our approach against publicly available datasets, which were used in three international AV competitions. Furthermore, we constructed our own corpora, where we evaluated our method against state-of-the-art approaches and achieved, in both cases, promising results.",
"title": ""
},
{
"docid": "4ee5931bf57096913f7e13e5da0fbe7e",
"text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.",
"title": ""
},
{
"docid": "dee24c18a7d653f3d4136031bcb6efcb",
"text": "In mobile cloud computing, application offloading is implemented as a software level solution for augmenting computing potentials of smart mobile devices. VM is one of the prominent approaches for offloading computational load to cloud server nodes. A challenging aspect of such frameworks is the additional computing resources utilization in the deployment and management of VM on Smartphone. The deployment of Virtual Machine (VM) requires computing resources for VM creation and configuration. The management of VM includes computing resources utilization in the monitoring of VM in entire lifecycle and physical resources management for VM on Smartphone. The objective of this work is to ensure that VM deployment and management requires additional computing resources on mobile device for application offloading. This paper analyzes the impact of VM deployment and management on the execution time of application in different experiments. We investigate VM deployment and management for application processing in simulation environment by using CloudSim, which is a simulation toolkit that provides an extensible simulation framework to model the simulation of VM deployment and management for application processing in cloud-computing infrastructure. VM deployment and management in application processing is evaluated by analyzing VM deployment, the execution time of applications and total execution time of the simulation. The analysis concludes that VM deployment and management require additional resources on the computing host. Therefore, VM deployment is a heavyweight approach for process offloading on smart mobile devices.",
"title": ""
},
{
"docid": "600d04e1d78084b36c9fb573fb9d699a",
"text": "A mobile robot is designed to pick and place the objects through voice commands. This work would be practically useful to wheelchair bound persons. The pick and place robot is designed in a way that it is able to help the user to pick up an item that is placed at two different levels using an extendable arm. The robot would move around to pick up an item and then pass it back to the user or to a desired location as told by the user. The robot control is achieved through voice commands such as left, right, straight, etc. in order to help the robot to navigate around. Raspberry Pi 2 controls the overall design with 5 DOF servo motor arm. The webcam is used to navigate around which provides live streaming using a mobile application for the user to look into. Results show the ability of the robot to pick and place the objects up to a height of 23.5cm through proper voice commands.",
"title": ""
},
{
"docid": "70ec2398526863c05b41866593214d0a",
"text": "Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.",
"title": ""
},
{
"docid": "003fc1e182a045889206ec8b1b4b19d8",
"text": "Long short-term memory (LSTM) recurrent neural network language models compress the full context of variable lengths into a fixed size vector. In this work, we investigate the task of predicting the LSTM hidden representation of the full context from a truncated n-gram context as a subtask for training an n-gram feedforward language model. Since this approach is a form of knowledge distillation, we compare two methods. First, we investigate the standard transfer based on the Kullback-Leibler divergence of the output distribution of the feedforward model from that of the LSTM. Second, we minimize the mean squared error between the hidden state of the LSTM and that of the n-gram feedforward model. We carry out experiments on different subsets of the Switchboard speech recognition dataset for feedforward models with a short (5-gram) and a medium (10-gram) context length. We show that we get improvements in perplexity and word error rate of up to 8% and 4% relative for the medium model, while the improvements are only marginal for the short model.",
"title": ""
},
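A hypothetical PyTorch-style sketch of the two transfer variants summarized in the abstract above: a feedforward n-gram student trained with a KL term toward the LSTM teacher's output distribution and an MSE term that regresses the teacher's full-context hidden state from the truncated context. The architecture, dimensions, and the interpolation weight alpha are invented for illustration, and the student's hidden width is assumed to match the teacher's state size.

```python
# Hypothetical PyTorch sketch; architecture, sizes and names are invented and the
# student's hidden width is assumed to match the LSTM teacher's state size.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NGramStudent(nn.Module):
    def __init__(self, vocab_size, n=5, emb=128, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.trunk = nn.Sequential(nn.Linear((n - 1) * emb, hidden), nn.Tanh())
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ctx):                       # ctx: (batch, n-1) token ids
        h = self.trunk(self.embed(ctx).flatten(1))
        return self.out(h), h                     # logits and hidden representation

def distillation_loss(student_logits, student_h, teacher_logits, teacher_h, alpha=0.5):
    # (a) match the teacher's output distribution (Kullback-Leibler term)
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    # (b) regress the teacher's full-context hidden state from the truncated context
    mse = F.mse_loss(student_h, teacher_h)
    return alpha * kl + (1.0 - alpha) * mse
```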
{
"docid": "ef2996a04c819777cc4b88c47f502c21",
"text": "Bioprinting is an emerging technology for constructing and fabricating artificial tissue and organ constructs. This technology surpasses the traditional scaffold fabrication approach in tissue engineering (TE). Currently, there is a plethora of research being done on bioprinting technology and its potential as a future source for implants and full organ transplantation. This review paper overviews the current state of the art in bioprinting technology, describing the broad range of bioprinters and bioink used in preclinical studies. Distinctions between laser-, extrusion-, and inkjet-based bioprinting technologies along with appropriate and recommended bioinks are discussed. In addition, the current state of the art in bioprinter technology is reviewed with a focus on the commercial point of view. Current challenges and limitations are highlighted, and future directions for next-generation bioprinting technology are also presented. [DOI: 10.1115/1.4028512]",
"title": ""
},
{
"docid": "4ad09f27848c5f47de5bb58a522c28a3",
"text": "The rapid development of deep learning are enabling a plenty of novel applications such as image and speech recognition for embedded systems, robotics or smart wearable devices. However, typical deep learning models like deep convolutional neural networks (CNNs) consume so much on-chip storage and high-throughput compute resources that they cannot be easily handled by mobile or embedded devices with thrifty silicon and power budget. In order to enable large CNN models in mobile or more cutting-edge devices for IoT or cyberphysics applications, we proposed an efficient on-chip memory architecture for CNN inference acceleration, and showed its application to our in-house general-purpose deep learning accelerator. The redesigned on-chip memory subsystem, Memsqueezer, includes an active weight buffer set and data buffer set that embrace specialized compression methods to reduce the footprint of CNN weight and data set respectively. The Memsqueezer buffer can compress the data and weight set according to their distinct features, and it also includes a built-in redundancy detection mechanism that actively scans through the work-set of CNNs to boost their inference performance by eliminating the data redundancy. In our experiment, it is shown that the CNN accelerators with Memsqueezer buffers achieves more than 2x performance improvement and reduces 80% energy consumption on average over the conventional buffer design with the same area budget.",
"title": ""
},
{
"docid": "515fac2b02637ddee5e69a8a22d0e309",
"text": "The continuous expansion of the multilingual information society has led in recent years to a pressing demand for multilingual linguistic resources suitable to be used for different applications. In this paper we present the WordNet Domains Hierarchy (WDH), a language-independent resource composed of 164, hierarchically organized, domain labels (e.g. Architecture, Sport, Medicine). Although WDH has been successfully applied to various Natural Language Processing tasks, the first available version presented some problems, mostly related to the lack of a clear semantics of the domain labels. Other correlated issues were the coverage and the balancing of the domains. We illustrate a new version of WDH addressing these problems by an explicit and systematic reference to the Dewey Decimal Classification. The new version of WDH has a better defined semantics and is applicable to a wider range of tasks.",
"title": ""
},
{
"docid": "9fb5db3cdcffb968b54c7d23d8a690a2",
"text": "BACKGROUND\nPhysical activity is associated with many physical and mental health benefits, however many children do not meet the national physical activity guidelines. While schools provide an ideal setting to promote children's physical activity, adding physical activity to the school day can be difficult given time constraints often imposed by competing key learning areas. Classroom-based physical activity may provide an opportunity to increase school-based physical activity while concurrently improving academic-related outcomes. The primary aim of this systematic review and meta-analysis was to evaluate the impact of classroom-based physical activity interventions on academic-related outcomes. A secondary aim was to evaluate the impact of these lessons on physical activity levels over the study duration.\n\n\nMETHODS\nA systematic search of electronic databases (PubMed, ERIC, SPORTDiscus, PsycINFO) was performed in January 2016 and updated in January 2017. Studies that investigated the association between classroom-based physical activity interventions and academic-related outcomes in primary (elementary) school-aged children were included. Meta-analyses were conducted in Review Manager, with effect sizes calculated separately for each outcome assessed.\n\n\nRESULTS\nThirty-nine articles met the inclusion criteria for the review, and 16 provided sufficient data and appropriate design for inclusion in the meta-analyses. Studies investigated a range of academic-related outcomes including classroom behaviour (e.g. on-task behaviour), cognitive functions (e.g. executive function), and academic achievement (e.g. standardised test scores). Results of the meta-analyses showed classroom-based physical activity had a positive effect on improving on-task and reducing off-task classroom behaviour (standardised mean difference = 0.60 (95% CI: 0.20,1.00)), and led to improvements in academic achievement when a progress monitoring tool was used (standardised mean difference = 1.03 (95% CI: 0.22,1.84)). However, no effect was found for cognitive functions (standardised mean difference = 0.33 (95% CI: -0.11,0.77)) or physical activity (standardised mean difference = 0.40 (95% CI: -1.15,0.95)).\n\n\nCONCLUSIONS\nResults suggest classroom-based physical activity may have a positive impact on academic-related outcomes. However, it is not possible to draw definitive conclusions due to the level of heterogeneity in intervention components and academic-related outcomes assessed. Future studies should consider the intervention period when selecting academic-related outcome measures, and use an objective measure of physical activity to determine intervention fidelity and effects on overall physical activity levels.",
"title": ""
},
{
"docid": "b4e5153f7592394e8743bc0fdee40dcc",
"text": "This paper is focussed on the modelling and control of a hydraulically-driven biologically-inspired robotic leg. The study is part of a larger project aiming at the development of an autonomous quadruped robot (hyQ) for outdoor operations. The leg has two hydraulically-actuated degrees of freedom (DOF), the hip and knee joints. The actuation system is composed of proportional valves and asymmetric cylinders. After a brief description of the prototype leg, the paper shows the development of a comprehensive model of the leg where critical parameters have been experimentally identified. Subsequently the leg control design is presented. The core of this work is the experimental assessment of the pros and cons of single-input single-output (SISO) vs. multiple-input multiple-output (MIMO) and linear vs. nonlinear control algorithms in this application (the leg is a coupled multivariable system driven by nonlinear actuators). The control schemes developed are a conventional PID (linear SISO), a Linear Quadratic Regulator (LQR) controller (linear MIMO) and a Feedback Linearisation (FL) controller (nonlinear MIMO). LQR performs well at low frequency but its behaviour worsens at higher frequencies. FL produces the fastest response in simulation, but when implemented is sensitive to parameters uncertainty and needs to be properly modified to achieve equally good performance also in the practical implementation.",
"title": ""
},
{
"docid": "330bbffaefd9f5d165b8eca16db1f991",
"text": "1 Pharmacist, Professor, and Researcher at the College of Pharmacy at the Federal Fluminense University. 2 Physiatric Doctor at the Institute of Instituto de Medicina Física e Reabilitação do Hospital da Clínicas da Faculdade de Medicina da Universidade de São Paulo (Physical Medicine and Rehabilitation at the Hospital of the Clinics of the College of Medicine of the University of São Paulo). Coordinator of Teaching and Research of the Instituto Brasil de Tecnologias da Saúde (Brazilian Institute of Health Technologies). 3 Orthopediatric Doctor and Physiatrist, CSO of the Instituto Brasil de Tecnologias da Saúde (Brazilian Institute of Health Technologies). Peripheral vascular diseases (PVDS) are characterized as a circulation problem in the veins, arteries, and lymphatic system. The main therapy consists of changes in lifestyle such as diet and physical activity. The pharmacological therapy includes the use of vasoactive drugs, which are used in arteriopathies and venolymphatic disorders. The goal of this study was to research the scientific literature on the use and pharmacology of vasoactive drugs, emphasizing the efficacy of their local actions and administration.",
"title": ""
},
{
"docid": "5ccda95046b0e5d1cfc345011b1e350d",
"text": "Considerable emphasis is currently placed on reducing healthcare-associated infection through improving hand hygiene compliance among healthcare professionals. There is also increasing discussion in the lay media of perceived poor hand hygiene compliance among healthcare staff. Our aim was to report the outcomes of a systematic search for peer-reviewed, published studies - especially clinical trials - that focused on hand hygiene compliance among healthcare professionals. Literature published between December 2009, after publication of the World Health Organization (WHO) hand hygiene guidelines, and February 2014, which was indexed in PubMed and CINAHL on the topic of hand hygiene compliance, was searched. Following examination of relevance and methodology of the 57 publications initially retrieved, 16 clinical trials were finally included in the review. The majority of studies were conducted in the USA and Europe. The intensive care unit emerged as the predominant focus of studies followed by facilities for care of the elderly. The category of healthcare worker most often the focus of the research was the nurse, followed by the healthcare assistant and the doctor. The unit of analysis reported for hand hygiene compliance was 'hand hygiene opportunity'; four studies adopted the 'my five moments for hand hygiene' framework, as set out in the WHO guidelines, whereas other papers focused on unique multimodal strategies of varying design. We concluded that adopting a multimodal approach to hand hygiene improvement intervention strategies, whether guided by the WHO framework or by another tested multimodal framework, results in moderate improvements in hand hygiene compliance.",
"title": ""
},
{
"docid": "8d3c1e649e40bf72f847a9f8ac6edf38",
"text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.",
"title": ""
},
{
"docid": "41043268fe70f05d6225f0ac84651a6b",
"text": "BACKGROUND AND PURPOSE\nCombined Therapy (CT) composed of ultrasound and Interferential Therapy has been reported as a cost-effective, local analgesic intervention on tender points in Fibromyalgia (FM). This study aims to investigate the difference between CT applied once a week and twice a week in patients with FM.\n\n\nMETHOD\nFifty patients with the diagnosis of FM were randomized into two groups (G1 = once a week treatment and G2 = twice a week treatment) with each group containing 25 patients. All eighteen tender points were assessed and treated with CT during each session, over a three-month time period. Interferential Therapy was modulated at 4,000 Hz of current carrier, 100 Hz of amplitude modulated frequency and at a bearable sensorial threshold of intensity. Pulsed ultrasound of 1 MHz at 20% of 2.5 W/cm² was used. For evaluation, the Visual Analogue Scale, Fibromyalgia Impact Questionnaire, Post Sleep Inventory and the tender point count were utilized, and the examiner was blinded to the group assignments.\n\n\nRESULTS\nG1 and G2 showed a significant improvement in Visual Analogue Scale (p < 0.0001 and p < 0.0005, respectively), Tender Points (p < 0.005 and p < 0.001, respectively), Fibromyalgia Impact Questionnaire and Post Sleep Inventory (p < 0.005 and p < 0.05, respectively). However, there was no significant difference between the two groups in all performed analyses.\n\n\nCONCLUSION\nThere is no advantage in increasing the number of sessions of combined therapy in terms of reducing generalized pain, quality of life and sleep quality for patients with FM.",
"title": ""
},
{
"docid": "103f4ff03cc1aef7c173b36ccc33e680",
"text": "Wireless environments are typically characterized by unpredictable and unreliable channel conditions. In such environments, fragmentation of network-bound data is a commonly adapted technique to improve the probability of successful data transmissions and reduce the energy overheads incurred due to re-transmissions. The overall latencies involved with fragmentation and consequent re-assembly of fragments are often neglected which bear significant effects on the real-time guarantees of the participating applications. This work studies the latencies introduced as a result of the fragmentation performed at the link layer (MAC layer in IEEE 802.11) of the source device and their effects on end-to-end delay constraints of mobile applications (e.g., media streaming). Based on the observed effects, this work proposes a feedback-based adaptive approach that chooses an optimal fragment size to (a) satisfy end-to-end delay requirements of the distributed application and (b) minimize the energy consumption of the source device by increasing the probability of successful transmissions, thereby reducing re-transmissions and their associated costs.",
"title": ""
},
{
"docid": "ce2c05b6e0cc4b0116ebf22006f1749b",
"text": "Use of renewable energy and in particular solar energy has brought significant attention over the past decades. Photovoltaic (PV) power generation projects are implemented in very large number in many countries. Many research works are carried out to analyze and validate the performance of PV modules. Implementation of experimental set up for PV based power system with DC-DC converter to validate the performance of the system is not always possible due to practical constraints. Software based simulation model helps to analyze the performance of PV and a common circuit based model which could be used for validating any commercial PV module will be more helpful. Simulation of mathematical model for Photovoltaic (PV) module and DC-DC boost converter is presented in this paper. The model presented in this paper can be used as a generalized PV module to analyze the performance of any commercially available PV modules. I-V characteristics and P-V characteristics of PV module under different temperature and irradiation level can be obtained using the model. The design of DC-DC boost converter is also discussed in detail. Simulation of DC-DC converter is performed and the results are obtained from constant DC supply fed converter and PV fed converter.",
"title": ""
},
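As a rough illustration of the kind of mathematical PV model such studies simulate (here a simplified single-diode equation with series and shunt resistances omitted, which is not necessarily the exact model of the paper above), the sketch below sweeps the terminal voltage to obtain I-V and P-V points for a given irradiance and temperature. All module parameters are placeholders rather than data for any specific commercial module.

```python
# Illustrative simplified single-diode PV model (series and shunt resistances
# omitted); parameter values are placeholders, not any specific commercial module.
import numpy as np

q, k = 1.602e-19, 1.381e-23          # electron charge [C], Boltzmann constant [J/K]

def pv_current(V, G=1000.0, T=298.15, Isc_ref=8.21, I0=1e-7,
               n=1.3, Ns=54, G_ref=1000.0, Ki=0.0032, T_ref=298.15):
    """Module current [A] at terminal voltage V [V], irradiance G [W/m^2], temperature T [K]."""
    Vt = k * T / q                                        # thermal voltage of one cell
    Iph = (Isc_ref + Ki * (T - T_ref)) * G / G_ref        # photo-generated current
    return Iph - I0 * (np.exp(V / (n * Ns * Vt)) - 1.0)   # ideal diode equation

V = np.linspace(0.0, 33.0, 300)
I = np.clip(pv_current(V), 0.0, None)                     # I-V curve
P = V * I                                                 # P-V curve
print("approximate maximum power point: %.1f V, %.1f W" % (V[P.argmax()], P.max()))
```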
{
"docid": "f62950bcb20c034de7a78f21887ce05b",
"text": "In the past decade, the role of data has increased exponentially from something that is queried or reported on, to becoming a true corporate asset. The same time period has also seen marked growth in corporate structural complexity. This combination has lead to information management challenges, as the data moving across a multitude of systems lends itself to a higher likelihood of impacting dependent processes and systems, should something go wrong or be changed. Many enterprise data projects are faced with low success rates and consequently subject to high amounts of scrutiny as senior leadership struggles to identify return on investment. While there are many tools and methods to increase a companies' ability to govern data, this research is based on the premise that you can not govern what you do not know. This lack of awareness of the corporate data landscape impacts the ability to govern data, which in turn impacts overall data quality within organizations. This paper seeks to propose a tools and techniques for companies to better gain an awareness of the landscape of their data, processes, and organizational attributes through the use of linked data, via the Resource Description Framework (RDF) and ontology. The outcome of adopting such techniques is an increased level of data awareness within the organization, resulting in improved ability to govern corporate data assets, and in turn increased data quality.",
"title": ""
},
{
"docid": "1c83671ad725908b2d4a6467b23fc83f",
"text": "Although many IT and business managers today may be lured into business intelligence (BI) investments by the promise of predictive analytics and emerging BI trends, creating an enterprise-wide BI capability is a journey that takes time. This article describes Norfolk Southern Railway’s BI journey, which began in the early 1990s with departmental reporting, evolved into data warehousing and analytic applications, and has resulted in a company that today uses BI to support corporate strategy. We describe how BI at Norfolk Southern evolved over several decades, with the company developing strong BI foundations and an effective enterprise-wide BI capability. We also identify the practices that kept the BI journey “on track.” These practices can be used by other IT and business leaders as they plan and develop BI capabilities in their own organizations.",
"title": ""
},
{
"docid": "aee2a31b02de518edda1c35f059cbe89",
"text": "A key challenge of future mobile communication research is to strike an attractive compromise between wireless network's area spectral efficiency and energy efficiency. This necessitates a clean-slate approach to wireless system design, embracing the rich body of existing knowledge, especially on multiple-input-multiple-ouput (MIMO) technologies. This motivates the proposal of an emerging wireless communications concept conceived for single-radio-frequency (RF) large-scale MIMO communications, which is termed as SM. The concept of SM has established itself as a beneficial transmission paradigm, subsuming numerous members of the MIMO system family. The research of SM has reached sufficient maturity to motivate its comparison to state-of-the-art MIMO communications, as well as to inspire its application to other emerging wireless systems such as relay-aided, cooperative, small-cell, optical wireless, and power-efficient communications. Furthermore, it has received sufficient research attention to be implemented in testbeds, and it holds the promise of stimulating further vigorous interdisciplinary research in the years to come. This tutorial paper is intended to offer a comprehensive state-of-the-art survey on SM-MIMO research, to provide a critical appraisal of its potential advantages, and to promote the discussion of its beneficial application areas and their research challenges leading to the analysis of the technological issues associated with the implementation of SM-MIMO. The paper is concluded with the description of the world's first experimental activities in this vibrant research field.",
"title": ""
}
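The record above surveys spatial modulation, where part of the bit stream is conveyed by the index of the single active transmit antenna. A minimal sketch of that mapping idea follows; the antenna count and QPSK constellation are illustrative assumptions, not taken from the survey.

```python
# Hedged sketch of the basic spatial-modulation (SM) mapping idea:
# log2(Nt) bits pick the single active antenna, log2(M) bits pick the symbol.
# Illustrative toy only, not an implementation from the survey.
import numpy as np

def sm_map(bits, n_tx=4, qpsk=np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)):
    """Map log2(n_tx)+2 bits to an SM transmit vector with one non-zero entry."""
    k_ant = int(np.log2(n_tx))
    antenna = int("".join(map(str, bits[:k_ant])), 2)               # antenna-index bits
    symbol = qpsk[int("".join(map(str, bits[k_ant:k_ant + 2])), 2)]  # QPSK bits
    x = np.zeros(n_tx, dtype=complex)
    x[antenna] = symbol                                              # single active RF chain
    return x

print(sm_map([1, 0, 1, 1]))   # antenna 2 active, carrying the QPSK symbol for bits '11'
```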
] |
scidocsrr
|
1354d310a6fbe285712f0295fbdb9114
|
Adaptive Gesture Recognition with Variation Estimation for Interactive Systems
|
[
{
"docid": "f3590467f740bc575e995389c9cc3684",
"text": "Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human–computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "32a4c17a53643042a5c19180bffd7c21",
"text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.",
"title": ""
}
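The $1-recognizer passage above describes resampling each stroke to a fixed number of points and scoring candidates by average point-to-point distance after alignment. Below is a condensed rendering of those two steps; the full recognizer also rotates, scales, and translates strokes, which is omitted here, so treat this as my own sketch rather than the authors' reference pseudocode.

```python
# Hedged sketch of two core $1-recognizer steps: resampling a stroke to N
# equidistant points and scoring by mean point-to-point distance.
import math

def resample(points, n=64):
    """Resample a stroke (list of (x, y) tuples) to n equidistant points."""
    path_len = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval, acc, out = path_len / (n - 1), 0.0, [points[0]]
    pts = list(points)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)       # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:            # guard against floating-point shortfall
        out.append(points[-1])
    return out[:n]

def path_distance(a, b):
    """Average pointwise distance between two equally resampled strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```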
] |
[
{
"docid": "6eb9db965b78d885b04a6ea70b21a58b",
"text": "Synthetic biology has the potential to benefit society with novel applications that can improve soil quality, produce biofuels, grow customized biological tissue, and perform intelligent drug delivery, among many other possibilities. Engineers are creating techniques to program living cells, inserting new logic, and leveraging cell-to-cell communication, which result in changes to a cell's core functionality. Using these techniques, we can now create synthetic biological organisms (SBOs) with entirely new (potentially unseen) behaviors, which, similar to silicon devices, can sense, ac-tuate, perform computation, and interconnect with other networks at the nanoscale level. SBOs are programmable evolving entities, and can be likened to self-adaptive programs that read inputs, process them, and produce outputs, reacting differently to different environmental conditions. With the increasing complexity of potential programs for SBOs, as in any new technology, there will be both beneficial as well as malicious uses. Although there has been much discussion about the potential safety and security risks of SBOs, and some research on predicting whether engineered life will be harmful, there has been little research on how to validate or verify safety of SBOs. In this thesis, we lay a foundation for validating and verifying safety for SBOs. We first present two case studies where we give insight into the difficulties of determining whether novel SBOs will be harmful given the vast combinatorial search space available for their engineering. Second, we explain how the current U.S. regulatory environment is fragmented with respect to the multiple dimensions of SBOs. Finally, we present a way forward for formalizing the architecture of SBOs and present a case study to show how we might utilize assurance cases to reason about SBO safety. 3 Acknowledgments I would like to personally thank my parents, Mark and Sandy Firestone, for supporting my academic career and neverending pursuit of knowledge. I also thank my advisors, Dr. Myra Cohen and Dr. Massimiliano Pierobon for all of the helpful comments which made this thesis much more focused and coherent. I finally wish to thank Dr. Jitender Deogun for encouraging me to pursue graduate studies in Computer Science. Introduction The emerging science of synthetic biology and the advent of synthetic biological organisms (SBOs) offer great hope for benefitting society through applications such as the enhancement of soil quality [9], the creation of new sources for biofuels [132], the development of engineered biological tissue [88, 104], and the synthesis of biocompatible intelligent drug delivery systems …",
"title": ""
},
{
"docid": "47fccbf00b2caaad529d660073b7e9a0",
"text": "The rapidly increasing popularity of community-based Question Answering (cQA) services, e.g. Yahoo! Answers, Baidu Zhidao, etc. have attracted great attention from both academia and industry. Besides the basic problems, like question searching and answer finding, it should be noted that the low participation rate of users in cQA service is the crucial problem which limits its development potential. In this paper, we focus on addressing this problem by recommending answer providers, in which a question is given as a query and a ranked list of users is returned according to the likelihood of answering the question. Based on the intuitive idea for recommendation, we try to introduce topic-level model to improve heuristic term-level methods, which are treated as the baselines. The proposed approach consists of two steps: (1) discovering latent topics in the content of questions and answers as well as latent interests of users to build user profiles; (2) recommending question answerers for new arrival questions based on latent topics and term-level model. Specifically, we develop a general generative model for questions and answers in cQA, which is then altered to obtain a novel computationally tractable Bayesian network model. Experiments are carried out on a real-world data crawled from Yahoo! Answers during Jun 12 2007 to Aug 04 2007, which consists of 118510 questions, 772962 answers and 150324 users. The experimental results reveal significant improvements over the baseline methods and validate the positive influence of topic-level information.",
"title": ""
},
{
"docid": "15dbf1ad05c8219be484c01145c09b6c",
"text": "In this paper, we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O ( √ Td ln(KT ln(T )/δ) ) regret bound that holds with probability 1− δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω( √ Td) for this setting, matching the upper bound up to logarithmic factors.",
"title": ""
},
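The bandit passage above analyzes the simplest efficient upper-confidence-bound algorithm for linear payoffs. A common concrete instance of that family is LinUCB; the sketch below is the generic textbook form and is not necessarily the exact variant whose regret bound is proved in the paper.

```python
# Hedged sketch of a LinUCB-style upper-confidence-bound rule for linear payoffs.
import numpy as np

class LinUCB:
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)          # ridge-regularized Gram matrix
        self.b = np.zeros(d)        # feature-weighted reward sum

    def choose(self, arm_features):
        """arm_features: (K, d) array of context vectors, one per arm."""
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        # mean estimate plus exploration bonus sqrt(x^T A^-1 x) per arm
        bonus = np.sqrt(np.einsum("ki,ij,kj->k", arm_features, A_inv, arm_features))
        return int(np.argmax(arm_features @ theta + self.alpha * bonus))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

bandit = LinUCB(d=5)
ctx = np.random.randn(10, 5)
arm = bandit.choose(ctx)
bandit.update(ctx[arm], reward=1.0)
```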
{
"docid": "6224b6e5d7cf7f48eccede10de743be2",
"text": "Tumor-associated macrophages (TAM) form a major component of the tumor stroma. However, important concepts such as TAM heterogeneity and the nature of the monocytic TAM precursors remain speculative. Here, we show for the first time that mouse mammary tumors contained functionally distinct subsets of TAMs and provide markers for their identification. Furthermore, in search of the TAM progenitors, we show that the tumor-monocyte pool almost exclusively consisted of Ly6C(hi)CX(3)CR1(low) monocytes, which continuously seeded tumors and renewed all nonproliferating TAM subsets. Interestingly, gene and protein profiling indicated that distinct TAM populations differed at the molecular level and could be classified based on the classic (M1) versus alternative (M2) macrophage activation paradigm. Importantly, the more M2-like TAMs were enriched in hypoxic tumor areas, had a superior proangiogenic activity in vivo, and increased in numbers as tumors progressed. Finally, it was shown that the TAM subsets were poor antigen presenters, but could suppress T-cell activation, albeit by using different suppressive mechanisms. Together, our data help to unravel the complexities of the tumor-infiltrating myeloid cell compartment and provide a rationale for targeting specialized TAM subsets, thereby optimally \"re-educating\" the TAM compartment.",
"title": ""
},
{
"docid": "3e9aa3bcc728f8d735f6b02e0d7f0502",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: Linda.Marion@drexel.edu. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "5b4e2380172b90c536eb974268a930b6",
"text": "This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.",
"title": ""
},
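The road-segmentation record above predicts road/lane labels per pixel at the input resolution. As a generic illustration of that idea only, here is a tiny encoder-decoder network in PyTorch; it does not reproduce the paper's architecture or its class-to-filter mapping refinement, and the input size is an arbitrary example.

```python
# Hedged sketch: a minimal fully-convolutional segmentation network that
# returns per-pixel logits at the input resolution. Illustrative only.
import torch
import torch.nn as nn

class TinyRoadSeg(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # logits at the input resolution

logits = TinyRoadSeg()(torch.randn(1, 3, 192, 640))
print(logits.shape)   # torch.Size([1, 2, 192, 640])
```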
{
"docid": "0bef4c6547ac1266686bf53fe93f05fc",
"text": "According to some estimates, more than half of the world's population is multilingual to some extent. Because of the centrality of language use to human experience and the deep connections between linguistic and nonlinguistic processing, it would not be surprising to find that there are interactions between bilingualism and cognitive and brain processes. The present review uses the framework of experience-dependent plasticity to evaluate the evidence for systematic modifications of brain and cognitive systems that can be attributed to bilingualism. The review describes studies investigating the relation between bilingualism and cognition in infants and children, younger and older adults, and patients, using both behavioral and neuroimaging methods. Excluded are studies whose outcomes focus primarily on linguistic abilities because of their more peripheral contribution to the central question regarding experience-dependent changes to cognition. Although most of the research discussed in the review reports some relation between bilingualism and cognitive or brain outcomes, several areas of research, notably behavioral studies with young adults, largely fail to show these effects. These discrepancies are discussed and considered in terms of methodological and conceptual issues. The final section proposes an account based on \"executive attention\" to explain the range of research findings and to set out an agenda for the next steps in this field. (PsycINFO Database Record",
"title": ""
},
{
"docid": "a7e7d4232bd5c923746a1ecd7b5d4a27",
"text": "OBJECTIVE\nThe goal of this project was to determine whether screening different groups of elderly individuals in a general or specialty practice would be beneficial in detecting dementia.\n\n\nBACKGROUND\nEpidemiologic studies of aging and dementia have demonstrated that the use of research criteria for the classification of dementia has yielded three groups of subjects: those who are demented, those who are not demented, and a third group of individuals who cannot be classified as normal or demented but who are cognitively (usually memory) impaired.\n\n\nMETHODS\nThe authors conducted computerized literature searches and generated a set of abstracts based on text and index words selected to reflect the key issues to be addressed. Articles were abstracted to determine whether there were sufficient data to recommend the screening of asymptomatic individuals. Other research studies were evaluated to determine whether there was value in identifying individuals who were memory-impaired beyond what one would expect for age but who were not demented. Finally, screening instruments and evaluation techniques for the identification of cognitive impairment were reviewed.\n\n\nRESULTS\nThere were insufficient data to make any recommendations regarding cognitive screening of asymptomatic individuals. Persons with memory impairment who were not demented were characterized in the literature as having mild cognitive impairment. These subjects were at increased risk for developing dementia or AD when compared with similarly aged individuals in the general population.\n\n\nRECOMMENDATIONS\nThere were sufficient data to recommend the evaluation and clinical monitoring of persons with mild cognitive impairment due to their increased risk for developing dementia (Guideline). Screening instruments, e.g., Mini-Mental State Examination, were found to be useful to the clinician for assessing the degree of cognitive impairment (Guideline), as were neuropsychologic batteries (Guideline), brief focused cognitive instruments (Option), and certain structured informant interviews (Option). Increasing attention is being paid to persons with mild cognitive impairment for whom treatment options are being evaluated that may alter the rate of progression to dementia.",
"title": ""
},
{
"docid": "2cca7bc6aad1da4146dea7b99987fcb4",
"text": "The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites. Therefore, it could bring us very big convenient. To safeguard patients’ privacy, authentication schemes for the TMIS attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed their scheme could withstand various attacks. However, in this paper, we point out that Tan’s scheme is vulnerable to the Denial-of-Service attack. To enhance security, we also propose an improved scheme based on Tan’s work. Security and performance analysis shows our scheme not only could overcome weakness in Tan’s scheme but also has better performance.",
"title": ""
},
{
"docid": "cb6d60c4948bcf2381cb03a0e7dc8312",
"text": "While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to its humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.",
"title": ""
},
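The humor-detection record above describes a supervised classifier trained on crowdsourced tweet labels. The sketch below shows one plausible minimal pipeline with scikit-learn; the tweets and labels are placeholders, and the paper's actual features and learner may differ.

```python
# Hedged sketch: a minimal supervised tweet-humor classifier. Placeholder data;
# not the authors' feature set or model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

tweets = ["ejemplo de tuit humoristico ...", "noticia seria sobre economia ..."] * 50
labels = [1, 0] * 50   # 1 = humorous, 0 = not humorous (crowdsourced annotations)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(clf, tweets, labels, cv=5, scoring="precision").mean())
```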
{
"docid": "e0cb22810c7dc3797e71dd39f966e7ce",
"text": "A crystal-based differential oscillator circuit offering simultaneously high stability and ultra-low power consumption is presented for timekeeping and demanding radio applications. The differential circuit structure -in contrast to that of the conventional 3-points- does not require any loading capacitance to be functional and the power consumption can thus be minimized. Although the loading capacitance is omitted a very precise absolute oscillation frequency can be obtained as well as an excellent insensitivity to temperature variations thanks to the reduced parasitics of a deep-submicron technology. The power consumption of a 12.8MHz quartz oscillator including an amplitude regulation mechanism is below 1 µA under a 1.8 to 0.6V supply voltage range.",
"title": ""
},
{
"docid": "1e6310e8b16625e8f8319c7386723e55",
"text": "Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compart- ments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention.\n We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.",
"title": ""
},
{
"docid": "acb569b267eae92a6e33b52725f28833",
"text": "A multi-objective design procedure is applied to the design of a close-coupled inductor for a three-phase interleaved 140kW DC-DC converter. For the multi-objective optimization, a genetic algorithm is used in combination with a detailed physical model of the inductive component. From the solution of the optimization, important conclusions about the advantages and disadvantages of using close-coupled inductors compared to separate inductors can be drawn.",
"title": ""
},
{
"docid": "ae7e5fba4c48865f96d2d0fb66821a94",
"text": "On the Semantic Web, data will inevitably come from many different ontologies, and information processing across ontologies is not possible without knowing the semantic mappings between them. Manually finding such mappings is tedious, error-prone, and clearly not possible on the Web scale. Hence the development of tools to assist in the ontology mapping process is crucial to the success of the Semantic Web. We describe GLUE, a system that employs machine learning techniques to find such mappings. Given two ontologies, for each concept in one ontology GLUE finds the most similar concept in the other ontology. We give well-founded probabilistic definitions to several practical similarity measures and show that GLUE can work with all of them. Another key feature of GLUE is that it uses multiple learning strategies, each of which exploits well a different type of information either in the data instances or in the taxonomic structure of the ontologies. To further improve matching accuracy, we extend GLUE to incorporate commonsense knowledge and domain constraints into the matching process. Our approach is thus distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge. We describe a set of experiments on several real-world domains and show that GLUE proposes highly accurate semantic mappings. Finally, we extend GLUE to find complex mappings between ontologies and describe experiments that show the promise of the approach.",
"title": ""
},
{
"docid": "83c407843732c4d237ff6e07da40297f",
"text": "Although deep reinforcement learning has achieved great success recently, there are still challenges in Real Time Strategy (RTS) games. Due to its large state and action space, as well as hidden information, RTS games require macro strategies as well as micro level manipulation to obtain satisfactory performance. In this paper, we present a novel hierarchical reinforcement learning model for mastering Multiplayer Online Battle Arena (MOBA) games, a sub-genre of RTS games. In this hierarchical framework, agents make macro strategies by imitation learning and do micromanipulations through reinforcement learning. Moreover, we propose a simple self-learning method to get better sample efficiency for reinforcement part and extract some global features by multi-target detection method in the absence of game engine or API. In 1v1 mode, our agent successfully learns to combat and defeat built-in AI with 100% win rate, and experiments show that our method can create a competitive multi-agent for a kind of mobile MOBA game King of Glory (KOG) in 5v5 mode.",
"title": ""
},
{
"docid": "8f53f02a1bae81e5c06828b6147d2934",
"text": "E-Government, as a vehicle to deliver enhanced services to citizens, is now extending its reach to the elderly population through provision of targeted services. In doing so, the ideals of ubiquitous e-Government may be better achieved. However, there is a lack of studies on e-Government adoption among senior citizens, especially considering that this age group is growing in size and may be averse to new IT applications. This study aims to address this gap by investigating an innovative e- Government service specifically tailored for senior citizens, called CPF e-Withdrawal. Technology adoption model (TAM) is employed as the theoretical foundation, in which perceived usefulness is recognized as the most significant predictor of adoption intention. This study attempts to identify the antecedents of perceived usefulness by drawing from the innovation diffusion literature as well as age-related studies. Our findings agree with TAM and indicate that internet safety perception and perceived ease of use are significant predictors of perceived usefulness.",
"title": ""
},
{
"docid": "fb3018d852c2a7baf96fb4fb1233b5e5",
"text": "The term twin spotting refers to phenotypes characterized by the spatial and temporal co-occurrence of two (or more) different nevi arranged in variable cutaneous patterns, and can be associated with extra-cutaneous anomalies. Several examples of twin spotting have been described in humans including nevus vascularis mixtus, cutis tricolor, lesions of overgrowth, and deficient growth in Proteus and Elattoproteus syndromes, epidermolytic hyperkeratosis of Brocq, and the so-called phacomatoses pigmentovascularis and pigmentokeratotica. We report on a 28-year-old man and a 15-year-old girl, who presented with a previously unrecognized association of paired cutaneous vascular nevi of the telangiectaticus and anemicus types (naevus vascularis mixtus) distributed in a mosaic pattern on the face (in both patients) and over the entire body (in the man) and a complex brain malformation (in both patients) consisting of cerebral hemiatrophy, hypoplasia of the cerebral vessels and homolateral hypertrophy of the skull and sinuses (known as Dyke-Davidoff-Masson malformation). Both patients had facial asymmetry and the young man had facial dysmorphism, seizures with EEG anomalies, hemiplegia, insulin-dependent diabetes mellitus (IDDM), autoimmune thyroiditis, a large hepatic cavernous vascular malformation, and left Legg-Calvé-Perthes disease (LCPD) [LCPD-like presentation]. Array-CGH analysis and mutation analysis of the RASA1 gene were normal in both patients.",
"title": ""
},
{
"docid": "bf1597a417aee9b080f738c7ef2bdffe",
"text": "BACKGROUND\nThe increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging.\n\n\nMETHODS\nWe propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions.\n\n\nRESULTS\nThis research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9.\n\n\nCONCLUSIONS\nThe obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.",
"title": ""
},
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
},
{
"docid": "23d7eb4d414e4323c44121040c3b2295",
"text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.",
"title": ""
}
] |
scidocsrr
|
0f9308f3886928237fa9837f5f1e2293
|
Scenario-Based Analysis of Software Architecture
|
[
{
"docid": "85180ac475de8437bde80a7dbbfc9759",
"text": "Excellent book is always being the best friend for spending little time in your office, night time, bus, and everywhere. It will be a good way to just look, open, and read the book while in that time. As known, experience and skill don't always come with the much money to acquire them. Reading this book with the PDF object oriented software engineering a use case driven approach will let you know more things.",
"title": ""
}
] |
[
{
"docid": "4fe25c65a4fd1886018482aceb82ad6f",
"text": "Article history: Received 21 March 2011 Revised 28 February 2012 Accepted 5 March 2012 Available online 26 March 2012 The purpose of this paper is (1) to identify critical issues in the current literature on ethical leadership — i.e., the conceptual vagueness of the construct itself and the focus on a Western-based perspective; and (2) to address these issues and recent calls for more collaboration between normative and empirical-descriptive inquiry of ethical phenomena by developing an interdisciplinary integrative approach to ethical leadership. Based on the analysis of similarities between Western and Eastern moral philosophy and ethics principles of the world religions, the present approach identifies four essential normative reference points of ethical leadership— the four central ethical orientations: (1) humane orientation, (2) justice orientation, (3) responsibility and sustainability orientation, and (4) moderation orientation. Research propositions on predictors and consequences of leader expressions of the four central orientations are offered. Real cases of ethical leadership choices, derived from in-depth interviews with international leaders, illustrate how the central orientations play out in managerial practice. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d8a194a88ccf20b8160b75d930969c85",
"text": "We describe the design and hardware implementation of our walking and manipulation controllers that are based on a cascade of online optimizations. A virtual force acting at the robot's center of mass (CoM) is estimated and used to compensated for modeling errors of the CoM and unplanned external forces. The proposed controllers have been implemented on the Atlas robot, a full size humanoid robot built by Boston Dynamics, and used in the DARPA Robotics Challenge Finals, which consisted of a wide variety of locomotion and manipulation tasks.",
"title": ""
},
{
"docid": "1a0d0b0b38e6d6434448cee8959c58a8",
"text": "This paper reports the first results of an investigation into solutions to problems of security in computer systems; it establishes the basis for rigorous investigation by providing a general descriptive model of a computer system. Borrowing basic concepts and constructs from general systems theory, we present a basic result concerning security in computer systems, using precise notions of \"security\" and \"compromise\". We also demonstrate how a change in requirements can be reflected in the resulting mathematical model. A lengthy introductory section is included in order to bridge the gap between general systems theory and practical problem solving. ii PREFACE General systems theory is a relatively new and rapidly growing mathematical discipline which shows great promise for application in the computer sciences. The discipline includes both \"general systems-theory\" and \"general-systems-theory\": that is, one may properly read the phrase \"general systems theory\" in both ways. In this paper, we have borrowed from the works of general systems theorists, principally from the basic work of Mesarovic´, to formulate a mathematical framework within which to deal with the problems of secure computer systems. At the present time we feel that the mathematical representation developed herein is adequate to deal with most if not all of the security problems one may wish to pose. In Section III we have given a result which deals with the most trivial of the secure computer systems one might find viable in actual use. In the concluding section we review the application of our mathematical methodology and suggest major areas of concern in the design of a secure system. The results reported in this paper lay the groundwork for further, more specific investigation into secure computer systems. The investigation will proceed by specializing the elements of the model to represent particular aspects of system design and operation. Such an investigation will be reported in the second volume of this series where we assume a system with centralized access control. A preliminary investigation of distributed access is just beginning; the results of that investigation would be reported in a third volume of the series.",
"title": ""
},
{
"docid": "ad61c6474832ecbe671040dfcb64e6aa",
"text": "This paper provides a brief overview on the recent advances of small-scale unmanned aerial vehicles (UAVs) from the perspective of platforms, key elements, and scientific research. The survey starts with an introduction of the recent advances of small-scale UAV platforms, based on the information summarized from 132 models available worldwide. Next, the evolvement of the key elements, including onboard processing units, navigation sensors, mission-oriented sensors, communication modules, and ground control station, is presented and analyzed. Third, achievements of small-scale UAV research, particularly on platform design and construction, dynamics modeling, and flight control, are introduced. Finally, the future of small-scale UAVs' research, civil applications, and military applications are forecasted.",
"title": ""
},
{
"docid": "8f13fbf6de0fb0685b4a39ee5f3bb415",
"text": "This review presents one of the eight theories of the quality of life (QOL) used for making the SEQOL (self-evaluation of quality of life) questionnaire or the quality of life as realizing life potential. This theory is strongly inspired by Maslow and the review furthermore serves as an example on how to fulfill the demand for an overall theory of life (or philosophy of life), which we believe is necessary for global and generic quality-of-life research. Whereas traditional medical science has often been inspired by mechanical models in its attempts to understand human beings, this theory takes an explicitly biological starting point. The purpose is to take a close view of life as a unique entity, which mechanical models are unable to do. This means that things considered to be beyond the individual's purely biological nature, notably the quality of life, meaning in life, and aspirations in life, are included under this wider, biological treatise. Our interpretation of the nature of all living matter is intended as an alternative to medical mechanism, which dates back to the beginning of the 20th century. New ideas such as the notions of the human being as nestled in an evolutionary and ecological context, the spontaneous tendency of self-organizing systems for realization and concord, and the central role of consciousness in interpreting, planning, and expressing human reality are unavoidable today in attempts to scientifically understand all living matter, including human life.",
"title": ""
},
{
"docid": "cf51f466c72108d5933d070b307e5d6d",
"text": "The study reported here follows the suggestion by Caplan et al. (Justice Q, 2010) that risk terrain modeling (RTM) be developed by doing more work to elaborate, operationalize, and test variables that would provide added value to its application in police operations. Building on the ideas presented by Caplan et al., we address three important issues related to RTM that sets it apart from current approaches to spatial crime analysis. First, we address the selection criteria used in determining which risk layers to include in risk terrain models. Second, we compare the ‘‘best model’’ risk terrain derived from our analysis to the traditional hotspot density mapping technique by considering both the statistical power and overall usefulness of each approach. Third, we test for ‘‘risk clusters’’ in risk terrain maps to determine how they can be used to target police resources in a way that improves upon the current practice of using density maps of past crime in determining future locations of crime occurrence. This paper concludes with an in depth exploration of how one might develop strategies for incorporating risk terrains into police decisionmaking. RTM can be developed to the point where it may be more readily adopted by police crime analysts and enable police to be more effectively proactive and identify areas with the greatest probability of becoming locations for crime in the future. The targeting of police interventions that emerges would be based on a sound understanding of geographic attributes and qualities of space that connect to crime outcomes and would not be the result of identifying individuals from specific groups or characteristics of people as likely candidates for crime, a tactic that has led police agencies to be accused of profiling. In addition, place-based interventions may offer a more efficient method of impacting crime than efforts focused on individuals.",
"title": ""
},
{
"docid": "91f5c7b130a7eadef8df1b596cda1eaf",
"text": "It is well-established that within crisis-related communications, rumors are likely to emerge. False rumors, i.e. misinformation, can be detrimental to crisis communication and response; it is therefore important not only to be able to identify messages that propagate rumors, but also corrections or denials of rumor content. In this work, we explore the task of automatically classifying rumor stances expressed in crisisrelated content posted on social media. Utilizing a dataset of over 4,300 manually coded tweets, we build a supervised machine learning model for this task, achieving an accuracy over 88% across a diverse set of rumors of different types.",
"title": ""
},
{
"docid": "a25e2540e97918b954acbb6fdee57eb7",
"text": "Tweet streams provide a variety of real-life and real-time information on social events that dynamically change over time. Although social event detection has been actively studied, how to efficiently monitor evolving events from continuous tweet streams remains open and challenging. One common approach for event detection from text streams is to use single-pass incremental clustering. However, this approach does not track the evolution of events, nor does it address the issue of efficient monitoring in the presence of a large number of events. In this paper, we capture the dynamics of events using four event operations (create, absorb, split, and merge), which can be effectively used to monitor evolving events. Moreover, we propose a novel event indexing structure, called Multi-layer Inverted List (MIL), to manage dynamic event databases for the acceleration of large-scale event search and update. We thoroughly study the problem of nearest neighbour search using MIL based on upper bound pruning, along with incremental index maintenance. Extensive experiments have been conducted on a large-scale real-life tweet dataset. The results demonstrate the promising performance of our event indexing and monitoring methods on both efficiency and effectiveness.",
"title": ""
},
{
"docid": "5b7ff78bc563c351642e5f316a6d895b",
"text": "OBJECTIVE\nTo determine an albino population's expectations from an outreach albino clinic, understanding of skin cancer risk, and attitudes toward sun protection behavior.\n\n\nDESIGN\nSurvey, June 1, 1997, to September 30, 1997.\n\n\nSETTING\nOutreach albino clinics in Tanzania.\n\n\nPARTICIPANTS\nAll albinos 13 years and older and accompanying adults of younger children attending clinics. Unaccompanied children younger than 13 years and those too sick to answer questions were excluded. Ninety-four questionnaires were completed in 5 villages, with a 100% response rate.\n\n\nINTERVENTIONS\nInterview-based questionnaire with scoring system for pictures depicting poorly sun-protected albinos.\n\n\nRESULTS\nThe most common reasons for attending the clinic were health education and skin examination. Thirteen respondents (14%) believed albinism was inherited; it was more common to believe in superstitious causes of albinism than inheritance. Seventy-three respondents (78%) believed skin cancer was preventable, and 60 (63%) believed skin cancer was related to the sun. Seventy-two subjects (77%) thought sunscreen provided protection from the sun; 9 (10%) also applied it at night. Reasons for not wearing sun-protective clothing included fashion, culture, and heat. The hats provided were thought to have too soft a brim, to shrink, and to be ridiculed. Suggestions for additional clinic services centered on education and employment. Albinos who had read the educational booklet had no better understanding of sun avoidance than those who had not (P =.49).\n\n\nCONCLUSIONS\nThere was a reasonable understanding of risks of skin cancer and sun-avoidance methods. Clinical advice was often not followed for cultural reasons. The hats provided were unsuitable, and there was some confusion about the use of sunscreen. A lack of understanding of the cause of albinism led to many superstitions.",
"title": ""
},
{
"docid": "e2af17b368fef36187c895ad5fd20a58",
"text": "We study in this paper the problem of jointly clustering and learning representations. As several previous studies have shown, learning representations that are both faithful to the data to be clustered and adapted to the clustering algorithm can lead to better clustering performance, all the more so that the two tasks are performed jointly. We propose here such an approach for k-Means clustering based on a continuous reparametrization of the objective function that leads to a truly joint solution. The behavior of our approach is illustrated on various datasets showing its efficacy in learning representations for objects while clustering them.",
"title": ""
},
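The joint clustering record above relies on a continuous reparametrization of the k-Means objective so that representations and cluster representatives can be learned together by gradient descent. Below is a generic rendering of that idea using a softmax over negative squared distances; it is not the paper's exact objective or annealing schedule, and the network sizes are illustrative.

```python
# Hedged sketch of joint "representation + k-Means" learning: hard assignments
# are replaced by a softmax over negative distances to learnable centroids, so
# encoder, decoder, and centroids train together by gradient descent.
import torch
import torch.nn as nn

class DeepKMeans(nn.Module):
    def __init__(self, in_dim=784, latent=10, k=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, in_dim))
        self.centroids = nn.Parameter(torch.randn(k, latent))

    def loss(self, x, alpha=10.0, lam=0.1):
        z = self.enc(x)
        recon = ((self.dec(z) - x) ** 2).mean()           # autoencoder (faithfulness) term
        d2 = torch.cdist(z, self.centroids) ** 2          # squared distances to centroids
        soft = torch.softmax(-alpha * d2, dim=1)          # differentiable soft assignments
        cluster = (soft * d2).sum(dim=1).mean()           # soft k-Means term
        return recon + lam * cluster

model = DeepKMeans()
print(model.loss(torch.randn(32, 784)).item())
```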
{
"docid": "a9fae3b86b21e40e71b99e5374cd3d4d",
"text": "Motor vehicle collisions are an important cause of blunt abdominal trauma in pregnant woman. Among the possible outcomes of blunt abdominal trauma, placental abruption, direct fetal trauma, and rupture of the gravid uterus are described. An interesting case of complete fetal decapitation with uterine rupture due to a high-velocity motor vehicle collision is described. The external examination of the fetus showed a disconnection between the cervical vertebrae C3 and C4. The autopsy examination showed hematic infiltration of the epicranic soft tissues, an overlap of the parietal bones, and a subarachnoid hemorrhage in the posterior part of interparietal area. Histological analysis was carried out showing a lack of epithelium and hemorrhages in the subcutaneous tissue, a hematic infiltration between the muscular fibers of the neck and between the collagen and deep muscular fibers of the tracheal wall. Specimens collected from the placenta and from the uterus showed a hematic infiltration with hypotrophy of the placental villi, fibrosis of the mesenchymal villi with ischemic phenomena of the membrane. The convergence of circumstantial data, autopsy results, and histological data led us to conclude that the neck lesion was vital and the cause of death was attributed to the motor vehicle collision.",
"title": ""
},
{
"docid": "7e61b5f63d325505209c3284c8a444a1",
"text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied in design N-pole LPFs for N/spl les/5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, teeor cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.",
"title": ""
},
{
"docid": "001d2da1fbdaf2c49311f6e68b245076",
"text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.",
"title": ""
},
{
"docid": "32e92e1be00613e06a7bc03d457704ac",
"text": "Computer systems often fail due to many factors such as software bugs or administrator errors. Diagnosing such production run failures is an important but challenging task since it is difficult to reproduce them in house due to various reasons: (1) unavailability of users' inputs and file content due to privacy concerns; (2) difficulty in building the exact same execution environment; and (3) non-determinism of concurrent executions on multi-processors.\n Therefore, programmers often have to diagnose a production run failure based on logs collected back from customers and the corresponding source code. Such diagnosis requires expert knowledge and is also too time-consuming, tedious to narrow down root causes. To address this problem, we propose a tool, called SherLog, that analyzes source code by leveraging information provided by run-time logs to infer what must or may have happened during the failed production run. It requires neither re-execution of the program nor knowledge on the log's semantics. It infers both control and data value information regarding to the failed execution.\n We evaluate SherLog with 8 representative real world software failures (6 software bugs and 2 configuration errors) from 7 applications including 3 servers. Information inferred by SherLog are very useful for programmers to diagnose these evaluated failures. Our results also show that SherLog can analyze large server applications such as Apache with thousands of logging messages within only 40 minutes.",
"title": ""
},
{
"docid": "f794d4a807a4d69727989254c557d2d1",
"text": "The purpose of this study was to describe the operative procedures and clinical outcomes of a new three-column internal fixation system with anatomical locking plates on the tibial plateau to treat complex three-column fractures of the tibial plateau. From June 2011 to May 2015, 14 patients with complex three-column fractures of the tibial plateau were treated with reduction and internal fixation through an anterolateral approach combined with a posteromedial approach. The patients were randomly divided into two groups: a control group which included seven cases using common locking plates, and an experimental group which included seven cases with a new three-column internal fixation system with anatomical locking plates. The mean operation time of the control group was 280.7 ± 53.7 minutes, which was 215.0 ± 49.1 minutes in the experimental group. The mean intra-operative blood loss of the control group was 692.8 ± 183.5 ml, which was 471.4 ± 138.0 ml in the experimental group. The difference was statistically significant between the two groups above. The differences were not statistically significant between the following mean numbers of the two groups: Rasmussen score immediately after operation; active extension–flexion degrees of knee joint at three and 12 months post-operatively; tibial plateau varus angle (TPA) and posterior slope angle (PA) immediately after operation, at three and at 12 months post-operatively; HSS (The Hospital for Special Surgery) knee-rating score at 12 months post-operatively. All fractures healed. A three-column internal fixation system with anatomical locking plates on tibial plateau is an effective and safe tool to treat complex three-column fractures of the tibial plateau and it is more convenient than the common plate.",
"title": ""
},
{
"docid": "1b581e17dad529b3452d3fbdcb1b3dd1",
"text": "Authorship attribution is the task of identifying the author of a given text. The main concern of this task is to define an appropriate characterization of documents that captures the writing style of authors. This paper proposes a new method for authorship attribution supported on the idea that a proper identification of authors must consider both stylistic and topic features of texts. This method characterizes documents by a set of word sequences that combine functional and content words. The experimental results on poem classification demonstrated that this method outperforms most current state-of-the-art approaches, and that it is appropriate to handle the attribution of short documents.",
"title": ""
},
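The authorship-attribution record above characterizes documents by word sequences mixing function words and content words. As a rough approximation of that idea only, the sketch below keeps function words and masks everything else before building n-gram features; the word list, masking scheme, and classifier are my own illustrative choices, not the authors' algorithm.

```python
# Hedged sketch of a style feature in the spirit of the passage above: n-grams
# over sequences where function words are kept and content words are masked.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

FUNCTION_WORDS = {"the", "a", "of", "and", "to", "in", "that", "is", "with", "for"}

def mask_content(text):
    """Keep function words, replace everything else with a '*' placeholder."""
    return " ".join(w if w in FUNCTION_WORDS else "*" for w in text.lower().split())

docs = ["the wind of the night and a star ...", "to be in that silence is to dream ..."]
authors = ["author_a", "author_b"]

clf = make_pipeline(
    CountVectorizer(preprocessor=mask_content, tokenizer=str.split,
                    token_pattern=None, ngram_range=(2, 3)),  # masked-word n-grams
    MultinomialNB(),
)
clf.fit(docs, authors)
print(clf.predict(["the star of a dream ..."]))
```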
{
"docid": "063389c654f44f34418292818fc781e7",
"text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.",
"title": ""
},
{
"docid": "08b2de5f1c6356c988ac9d6f09ca9a31",
"text": "Novel conditions are derived that guarantee convergence of the sum-product algorithm (also known as loopy belief propagation or simply belief propagation (BP)) to a unique fixed point, irrespective of the initial messages, for parallel (synchronous) updates. The computational complexity of the conditions is polynomial in the number of variables. In contrast with previously existing conditions, our results are directly applicable to arbitrary factor graphs (with discrete variables) and are shown to be valid also in the case of factors containing zeros, under some additional conditions. The conditions are compared with existing ones, numerically and, if possible, analytically. For binary variables with pairwise interactions, sufficient conditions are derived that take into account local evidence (i.e., single-variable factors) and the type of pair interactions (attractive or repulsive). It is shown empirically that this bound outperforms existing bounds.",
"title": ""
},
{
"docid": "cbe37cbe2234797a0e3625dbc5c98b68",
"text": "This paper investigates a visual interaction system for vehicle-to-vehicle (V2V) platform, called V3I. Our system employs common visual cameras that are mounted on connected vehicles to perceive the existence of isolated vehicles in the same roadway, and provides human drivers with imagery situational awareness. This allows effective interactions between vehicles even with a low permeation rate of V2V devices. The underlying research problem for V3I includes two aspects: i) tracking isolated vehicles of interest over time through local cameras; ii) at each time-step fusing the results of local visual perceptions to obtain a global location map that involves both isolated and connected vehicles. In this paper, we introduce a unified probabilistic approach to solve the above two problems, i.e., tracking and localization, in a joint fashion. Our approach will explore both the visual features of individual vehicles in images and the pair-wise spatial relationships between vehicles. We develop a fast Markov Chain Monte Carlo (MCMC) algorithm to search the joint solution space efficiently, which enables real-time application. To evaluate the performance of the proposed approach, we collect and annotate a set of video sequences captured with a group of vehicle-resident cameras. Extensive experiments with comparisons clearly demonstrate that the proposed V3I approach can precisely recover the dynamic location map of the surrounding and thus enable direct visual interactions between vehicles .",
"title": ""
},
{
"docid": "eb59f239621dde59a13854c5e6fa9f54",
"text": "This paper presents a novel application of grammatical inference techniques to the synthesis of behavior models of software systems. This synthesis is used for the elicitation of software requirements. This problem is formulated as a deterministic finite-state automaton induction problem from positive and negative scenarios provided by an end-user of the software-to-be. A query-driven state merging algorithm (QSM) is proposed. It extends the RPNI and Blue-Fringe algorithms by allowing membership queries to be submitted to the end-user. State merging operations can be further constrained by some prior domain knowledge formulated as fluents, goals, domain properties, and models of external software components. The incorporation of domain knowledge both reduces the number of queries and guarantees that the induced model is consistent with such knowledge. The proposed techniques are implemented in the ISIS tool and practical evaluations on standard requirements engineering test cases and synthetic data illustrate the interest of this approach. Contact author: Pierre Dupont Department of Computing Science and Engineering (INGI) Université catholique de Louvain Place Sainte Barbe, 2. B-1348 Louvain-la-Neuve Belgium Email: Pierre.Dupont@uclouvain.be Phone: +32 10 47 91 14 Fax: +32 10 45 03 45",
"title": ""
}
] |
scidocsrr
|
aea48d17b29d7ab2d782d1f532d4eb32
|
Solving Single-digit Sudoku Subproblems
|
[
{
"docid": "87a6fd003dd6e23f27e791c9de8b8ba6",
"text": "The well-known travelling salesman problem is the following: \" A salesman is required ~,o visit once and only once each of n different cities starting from a base city, and returning to this city. What path minimizes the to ta l distance travelled by the salesman?\" The problem has been treated by a number of different people using a var ie ty of techniques; el. Dantzig, Fulkerson, Johnson [1], where a combination of ingemtity and linear programming is used, and Miller, Tucker and Zemlin [2], whose experiments using an all-integer program of Gomory did not produce results i~ cases with ten cities although some success was achieved in eases of simply four cities. The purpose of this note is to show tha t this problem can easily be formulated in dynamic programming terms [3], and resolved computationally for up to 17 cities. For larger numbers, the method presented below, combined with various simple manipulations, may be used to obtain quick approximate solutions. Results of this nature were independently obtained by M. Held and R. M. Karp, who are in the process of publishing some extensions and computat ional results.",
"title": ""
},
{
"docid": "38506c89b32c7c82d45040fd99c36986",
"text": "We provide a simple linear time transformation from a direct ed or undirected graph with labeled edges to an unlabeled digraph, such that paths in the input graph in which no two consecutive edges have the same label correspond to paths in the transformed graph and vice v ersa. Using this transformation, we provide efficient algorithms for finding paths and cycles with no two consecuti ve equal labels. We also consider related problems where the paths and cycles are required to be simple; we find ef ficient algorithms for the undirected case of these problems but show the directed case to be NP-complete. We app ly our path and cycle finding algorithms in a program for generating and solving Sudoku puzzles, and show experimentally that they lead to effective puzzlesolving rules that may also be of interest to human Sudoku puz zle solvers.",
"title": ""
}
] |
[
{
"docid": "40bd8351735f780ba104fa63383002fe",
"text": "M a y / J u n e 2 0 0 0 I E E E S O F T W A R E 37 between the user-requirements specification and the software-requirements specification, mandating complete documentation of each according to various rules. Other cases emphasize this distinction less. For instance, some groups at Microsoft argue that the difficulty of keeping a technical specification consistent with the program is more trouble than the benefit merits.2 We can find a wide range of views in industry literature and from the many organizations that write software. Is it possible to clarify these various artifacts and study their properties, given the wide variations in the use of terms and the many different kinds of software being written? Our aim is to provide a framework for talking about key artifacts, their attributes, and relationships at a general level, but precisely enough that we can rigorously analyze substantive properties.",
"title": ""
},
{
"docid": "60f31d60213abe65faec3eb69edb1eea",
"text": "In this paper, a novel multi-layer four-way out-of-phase power divider based on substrate integrated waveguide (SIW) is proposed. The four-way power division is realized by 3-D mode coupling; vertical partitioning of a SIW followed by lateral coupling to two half-mode SIW. The measurement results show the excellent insertion loss (S<inf>21</inf>, S<inf>31</inf>, S<inf>41</inf>, S<inf>51</inf>: −7.0 ± 0.5 dB) and input return loss (S<inf>11</inf>: −10 dB) in X-band (7.63 GHz ∼ 11.12 GHz). We expect that the proposed power divider play an important role for the integration of compact multi-way SIW circuits.",
"title": ""
},
{
"docid": "7881f99465004a45f3089b0ec23925e0",
"text": "In recent decades, extensive studies from diverse disciplines have focused on children's developmental awareness of different gender roles and the relationships between genders. Among these studies, researchers agree that children's picture books have an increasingly significant place in children's development because these books are a widely available cultural resource, offering young children a multitude of opportunities to gain information, become familiar with the printed pictures, be entertained, and experience perspectives other than their own. In such books, males are habitually described as active and domineering, while females rarely reveal their identities and very frequently are represented as meek and mild. This valuable venue for children's gender development thus unfortunately reflects engrained societal attitudes and biases in the available choices and expectations assigned to different genders. This discriminatory portrayal in many children's picture books also runs the risk of leading children toward a misrepresented and misguided realization of their true potential in their expanding world.",
"title": ""
},
{
"docid": "333bffc73983bc159248420d76afc7e6",
"text": "In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.",
"title": ""
},
{
"docid": "8b0a90d4f31caffb997aced79c59e50c",
"text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. 
Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the",
"title": ""
},
{
"docid": "5868ec5c17bf7349166ccd0600cc6b07",
"text": "Secure devices are often subject to attacks and behavioural analysis in order to inject faults on them and/or extract otherwise secret information. Glitch attacks, sudden changes on the power supply rails, are a common technique used to inject faults on electronic devices. Detectors are designed to catch these attacks. As the detectors become more efficient, new glitches that are harder to detect arise. Common glitch detection approaches, such as directly monitoring the power rails, can potentially find it hard to detect fast glitches, as these become harder to differentiate from noise. This paper proposes a design which, instead of monitoring the power rails, monitors the effect of a glitch on a sensitive circuit, hence reducing the risk of detecting noise as glitches.",
"title": ""
},
{
"docid": "0d38949c93a7b86a0245a7e5bfe89114",
"text": "Software Defined Radio (SDR) is a flexible architecture which can be configured to adapt various wireless standards, waveforms, frequency bands, bandwidths, and modes of operations. This paper presents a detailed survey of the existing hardware and software platform for SDRs. It also presents prototype system for designing and testing of software defined radios in MATLAB/Simulink and briefly discusses the salient functions of the prototype system for Cognitive Radio (CR). A prototype system for wireless personal area network is built and interfaced with a Universal Software Radio Peripheral-2 (USRP2) main-board and RFX2400 daughter board from Ettus Research LLC. The philosophy behind the prototype is to do all waveform-specific processing such as channel coding, modulation, filtering etc. on a host (PC) and general purpose high-speed operations like digital up and down conversion, decimation and interpolation etc. inside FPGA on an USRP2. MATLAB has a rich family of toolboxes that allows building software-defined and cognitive radio to explore various spectrum sensing, prediction and management techniques.",
"title": ""
},
{
"docid": "6fb50b6f34358cf3229bd7645bf42dcd",
"text": "With the in-depth study of sentiment analysis research, finer-grained opinion mining, which aims to detect opinions on different review features as opposed to the whole review level, has been receiving more and more attention in the sentiment analysis research community recently. Most of existing approaches rely mainly on the template extraction to identify the explicit relatedness between product feature and opinion terms, which is insufficient to detect the implicit review features and mine the hidden sentiment association in reviews, which satisfies (1) the review features are not appear explicit in the review sentences; (2) it can be deduced by the opinion words in its context. From an information theoretic point of view, this paper proposed an iterative reinforcement framework based on the improved information bottleneck algorithm to address such problem. More specifically, the approach clusters product features and opinion words simultaneously and iteratively by fusing both their semantic information and co-occurrence information. The experimental results demonstrate that our approach outperforms the template extraction based approaches.",
"title": ""
},
{
"docid": "0c34e8355f1635b3679159abd0a82806",
"text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.",
"title": ""
},
{
"docid": "913ea886485fae9b567146532ca458ac",
"text": "This article presents a new method to illustrate the feasibility of 3D topology creation. We base the 3D construction process on testing real cases of implementation of 3D parcels construction in a 3D cadastral system. With the utilization and development of dense urban space, true 3D geometric volume primitives are needed to represent 3D parcels with the adjacency and incidence relationship. We present an effective straightforward approach to identifying and constructing the valid volumetric cadastral object from the given faces, and build the topological relationships among 3D cadastral objects on-thefly, based on input consisting of loose boundary 3D faces made by surveyors. This is drastically different from most existing methods, which focus on the validation of single volumetric objects after the assumption of the object’s creation. Existing methods do not support the needed types of geometry/ topology (e.g. non 2-manifold, singularities) and how to create and maintain valid 3D parcels is still a challenge in practice. We will show that the method does not change the faces themselves and faces in a given input are independently specified. Various volumetric objects, including non-manifold 3D cadastral objects (legal spaces), can be constructed correctly by this method, as will be shown from the",
"title": ""
},
{
"docid": "758bc9b5e633d59afb155650239591a9",
"text": "A growing body of works address automated mining of biochemical knowledge from digital repositories of scientific literature, such as MEDLINE. Some of these works use abstracts as the unit of text from which to extract facts. Others use sentences for this purpose, while still others use phrases. Here we compare abstracts, sentences, and phrases in MEDLINE using the standard information retrieval performance measures of recall, precision, and effectiveness, for the task of mining interactions among biochemical terms based on term co-occurrence. Results show statistically significant differences that can impact the choice of text unit.",
"title": ""
},
{
"docid": "19d29667e1632ff6f0a7446de22cdb84",
"text": "Chronic kidney disease (CKD) is defined by persistent urine abnormalities, structural abnormalities or impaired excretory renal function suggestive of a loss of functional nephrons. The majority of patients with CKD are at risk of accelerated cardiovascular disease and death. For those who progress to end-stage renal disease, the limited accessibility to renal replacement therapy is a problem in many parts of the world. Risk factors for the development and progression of CKD include low nephron number at birth, nephron loss due to increasing age and acute or chronic kidney injuries caused by toxic exposures or diseases (for example, obesity and type 2 diabetes mellitus). The management of patients with CKD is focused on early detection or prevention, treatment of the underlying cause (if possible) to curb progression and attention to secondary processes that contribute to ongoing nephron loss. Blood pressure control, inhibition of the renin–angiotensin system and disease-specific interventions are the cornerstones of therapy. CKD complications such as anaemia, metabolic acidosis and secondary hyperparathyroidism affect cardiovascular health and quality of life, and require diagnosis and treatment.",
"title": ""
},
{
"docid": "6724af38a637d61ccc2a4ad8119c6e1a",
"text": "INTRODUCTION Pivotal to athletic performance is the ability to more maintain desired athletic performance levels during particularly critical periods of competition [1], such as during pressurised situations that typically evoke elevated levels of anxiety (e.g., penalty kicks) or when exposed to unexpected adversities (e.g., unfavourable umpire calls on crucial points) [2, 3]. These kinds of situations become markedly important when athletes, who are separated by marginal physical and technical differences, are engaged in closely contested matches, games, or races [4]. It is within these competitive conditions, in particular, that athletes’ responses define their degree of success (or lack thereof); responses that are largely dependent on athletes’ psychological attributes [5]. One of these attributes appears to be mental toughness (MT), which has often been classified as a critical success factor due to the role it plays in fostering adaptive responses to positively and negatively construed pressures, situations, and events [6 8]. However, as scholars have intensified",
"title": ""
},
{
"docid": "6cf3f0b1cb7a687d0c04dc91c574cda8",
"text": "In recent years, crowdsourcing has become essential in a wide range of Web applications. One of the biggest challenges of crowdsourcing is the quality of crowd answers as workers have wide-ranging levels of expertise and the worker community may contain faulty workers. Although various techniques for quality control have been proposed, a post-processing phase in which crowd answers are validated is still required. Validation is typically conducted by experts, whose availability is limited and who incur high costs. Therefore, we develop a probabilistic model that helps to identify the most beneficial validation questions in terms of both, improvement of result correctness and detection of faulty workers. Our approach allows us to guide the expert's work by collecting input on the most problematic cases, thereby achieving a set of high quality answers even if the expert does not validate the complete answer set. Our comprehensive evaluation using both real-world and synthetic datasets demonstrates that our techniques save up to 50% of expert efforts compared to baseline methods when striving for perfect result correctness. In absolute terms, for most cases, we achieve close to perfect correctness after expert input has been sought for only 20\\% of the questions.",
"title": ""
},
{
"docid": "13ac4474f01136b2603f2b7ee9eedf19",
"text": "Teamwork is best achieved when members of the team understand one another. Human-robot collaboration poses a particular challenge to this goal due to the differences between individual team members, both mentally/computationally and physically. One way in which this challenge can be addressed is by developing explicit models of human teammates. Here, we discuss, compare and contrast the many techniques available for modeling human cognition and behavior, and evaluate their benefits and drawbacks in the context of human-robot collaboration.",
"title": ""
},
{
"docid": "164fca8833981d037f861aada01d5f7f",
"text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.",
"title": ""
},
{
"docid": "5b17c5637af104b1f20ff1ca9ce9c700",
"text": "According to the traditional understanding of cerebrospinal fluid (CSF) physiology, the majority of CSF is produced by the choroid plexus, circulates through the ventricles, the cisterns, and the subarachnoid space to be absorbed into the blood by the arachnoid villi. This review surveys key developments leading to the traditional concept. Challenging this concept are novel insights utilizing molecular and cellular biology as well as neuroimaging, which indicate that CSF physiology may be much more complex than previously believed. The CSF circulation comprises not only a directed flow of CSF, but in addition a pulsatile to and fro movement throughout the entire brain with local fluid exchange between blood, interstitial fluid, and CSF. Astrocytes, aquaporins, and other membrane transporters are key elements in brain water and CSF homeostasis. A continuous bidirectional fluid exchange at the blood brain barrier produces flow rates, which exceed the choroidal CSF production rate by far. The CSF circulation around blood vessels penetrating from the subarachnoid space into the Virchow Robin spaces provides both a drainage pathway for the clearance of waste molecules from the brain and a site for the interaction of the systemic immune system with that of the brain. Important physiological functions, for example the regeneration of the brain during sleep, may depend on CSF circulation.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "a077507e8d2bde5bb326873413b5bd99",
"text": "Encryption is widely used across the internet to secure communications and ensure that information cannot be intercepted and read by a third party. However, encryption also allows cybercriminals to hide their messages and carry out successful malware attacks while avoiding detection. Further aiding criminals is the fact that web browsers display a green lock symbol in the URL bar when a connection to a website is encrypted. This symbol gives a false sense of security to users, who are in turn more likely to fall victim to phishing attacks. The risk of encrypted traffic means that information security researchers must explore new techniques to detect, classify, and take countermeasures against malicious traffic. So far there exists no approach for TLS detection in the wild. In this paper, we propose a method for identifying malicious use of web certificates using deep neural networks. Our system uses the content of TLS certificates to successfully identify legitimate certificates as well as malicious patterns used by attackers. The results show that our system is capable of identifying malware certificates with an accuracy of 94.87% and phishing certificates with an accuracy of 88.64%.",
"title": ""
},
{
"docid": "e7b2956529e0a0a29c9abaf8bb044a6c",
"text": "Information extraction studies have been conducted to improve the efficiency ansd accuracy of information retrieval. We developed information extraction techniques to extract name of company, period of document, currency, revenue, and number of employee information from financial report documents automatically. Different with other works, we applied a multi-strategy approach for developing extraction techniques. We separated information based on its similar characteristics before designing extraction techniques. We assumed that the difference of characteristics owned by each information induces the difference of strategy applied. First strategy is constructing extraction techniques using rule-based extraction method on information, which has good regularity on orthographic and layout features such as name of company, period of document and currency. Second strategy is applying machine learning-based extraction method on information, which has rich contextual and list look-up features such as revenue and number of employee. On the first strategy, rule patterns are defined by combining orthographic, layout, and limited contextual features. Defined rule patterns succeed to extract information and gain precision, recall, and F1-measure more than 0.98. On the second strategy, we conducted extraction task as classification task. First, we built classification models using Naive Bayes and Support Vector Machines algorithms. Then, we extracted the most informative features to train the classification models. The best classification model is used for extraction task. Contextual and list look-up features play important role in improving extraction performance. Second strategy succeed to extract revenue and number of employee information and gains precision, recall, and F-1 measure more than 0.93.",
"title": ""
}
] |
scidocsrr
|
5281a4d088a09066798e305cc71004dc
|
Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks
|
[
{
"docid": "a5b7253f56a487552ba3b0ce15332dd1",
"text": "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as BornInCitypa, bq ^ CityInCountrypb, cq ùñ Nationalitypa, cq. We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-ofthe-art confidence-based rule mining approach in mining horn rules that involve compositional reasoning.",
"title": ""
},
{
"docid": "2917b7b1453f9e6386d8f47129b605fb",
"text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"title": ""
},
{
"docid": "a64bcfefdebc43809636d6d39887f6e2",
"text": "This paper investigates the use of deep belief networks (DBN) for semantic tagging, a sequence classification task, in spoken language understanding (SLU). We evaluate the performance of the DBN based sequence tagger on the well-studied ATIS task and compare our technique to conditional random fields (CRF), a state-of-the-art classifier for sequence classification. In conjunction with lexical and named entity features, we also use dependency parser based syntactic features and part of speech (POS) tags [1]. Under both noisy conditions (output of automatic speech recognition system) and clean conditions (manual transcriptions), our deep belief network based sequence tagger outperforms the best CRF based system described in [1] by an absolute 2% and 1% F-measure, respectively.Upon carrying out an analysis of cases where CRF and DBN models made different predictions, we observed that when discrete features are projected onto a continuous space during neural network training, the model learns to cluster these features leading to its improved generalization capability, relative to a CRF model, especially in cases where some features are either missing or noisy.",
"title": ""
}
] |
[
{
"docid": "43fe2c4898a643be10928e8f677a59ef",
"text": "When people want to move to a new job, it is often difficult since there is too much job information available. To select an appropriate job and then submit a resume is tedious. It is particularly difficult for university students since they normally do not have any work experience and also are unfamiliar with the job market. To deal with the information overload for students during their transition into work, a job recommendation system can be very valuable. In this research, after fully investigating the pros and cons of current job recommendation systems for university students, we propose a student profiling based re-ranking framework. In this system, the students are recommended a list of potential jobs based on those who have graduated and obtained job offers over the past few years. Furthermore, recommended employers are also used as input for job recommendation result re-ranking. Our experimental study on real recruitment data over the past four years has shown this method’s potential.",
"title": ""
},
{
"docid": "fcb69bd97835da9f244841d54996f070",
"text": "A conventional transverse slot substrate integrated waveguide (SIW) periodic leaky wave antenna (LWA) provides a fan beam, usually E-plane beam having narrow beam width and H-plane having wider beamwidth. The main beam direction changes with frequency sweep. In the applications requiring a pencil beam, an array of the antenna is generally used to decrease the H-plane beam width which requires long and tiring optimization steps. In this paper, it is shown that the H-plane beamwidth can be easily decreased by using two baffles with a conventional leaky wave antenna. A prototype periodic leaky wave antenna with baffles is designed and fabricated for X-band applications. The E- and H-plane 3 dB beam widths of the antenna at 10.5GHz are, respectively, 6° and 22°. Over the frequency range 8.2–14 GHz, the antenna scans from θ = −60° to θ = 15°, from backward to forward direction. The use of baffles also improves the gain of the antenna including broadside direction by approximately 4 dB.",
"title": ""
},
{
"docid": "9445631e0850d2126750ffa50ae007ee",
"text": "Modern Visual Question Answering (VQA) models have been shown to rely heavily on superficial correlations between question and answer words learned during training – e.g. overwhelmingly reporting the type of room as kitchen or the sport being played as tennis, irrespective of the image. Most alarmingly, this shortcoming is often not well reflected during evaluation because the same strong priors exist in test distributions; however, a VQA system that fails to ground questions in image content would likely perform poorly in real-world settings. In this work, we present a novel regularization scheme for VQA that reduces this effect. We introduce a question-only model that takes as input the question encoding from the VQA model and must leverage language biases in order to succeed. We then pose training as an adversarial game between the VQA model and this question-only adversary – discouraging the VQA model from capturing language biases in its question encoding. Further, we leverage this question-only model to estimate the increase in model confidence after considering the image, which we maximize explicitly to encourage visual grounding. Our approach is a model agnostic training procedure and simple to implement. We show empirically that it can improve performance significantly on a bias-sensitive split of the VQA dataset for multiple base models – achieving state-of-the-art on this task. Further, on standard VQA tasks, our approach shows significantly less drop in accuracy compared to existing bias-reducing VQA models.",
"title": ""
},
{
"docid": "c6ec311353b0872bcc1dfd09abb7632e",
"text": "Deep neural network algorithms are difficult to analyze because they lack structure allowing to understand the properties of underlying transforms and invariants. Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data. Each new layer is computed with multidimensional convolutions along spatial and attribute variables. We introduce an efficient implementation of such networks where the dimensionality is progressively reduced by averaging intermediate layers along attribute indices. Hierarchical networks are tested on CIFAR image data bases where they obtain comparable precisions to state of the art networks, with much fewer parameters. We study some properties of the attributes learned from these databases.",
"title": ""
},
{
"docid": "a96a0d676a6818429689fbbc4f05022b",
"text": "This paper presents a new approach with Muller method for solving profit based unit commitment (PBUC). In deregulated environment, the generation companies (GENCOs) schedule their generators to maximize their profit rather than satisfying the power demand. While solving the PBUC problem, the information of forecasted price at the given predicted power demand is known. The PBUC problem is solved by the proposed approach in two stages. Initially, committed units table obtains information of the committed units and finally the non linear programming sub problem of economic dispatch is solved by Muller method. The proposed approach has been tested on a power system with 3 and 10 generating units. Simulation results of the proposed approach have been compared with existing methods and also with the traditional unit commitment. It is observed from the simulation results that the proposed algorithm provides maximum profit with less computational time compare to existing methods.",
"title": ""
},
{
"docid": "7fbc4db77312a4ee26cf6565b36d9664",
"text": "This paper describes a novel system for creating virtual creatures that move and behave in simulated three-dimensional physical worlds. The morphologies of creatures and the neural systems for controlling their muscle forces are both generated automatically using genetic algorithms. Different fitness evaluation functions are used to direct simulated evolutions towards specific behaviors such as swimming, walking, jumping, and following.\nA genetic language is presented that uses nodes and connections as its primitive elements to represent directed graphs, which are used to describe both the morphology and the neural circuitry of these creatures. This genetic language defines a hyperspace containing an indefinite number of possible creatures with behaviors, and when it is searched using optimization techniques, a variety of successful and interesting locomotion strategies emerge, some of which would be difficult to invent or built by design.",
"title": ""
},
{
"docid": "17faf590307caf41095530fcec1069c7",
"text": "Fine-grained visual recognition typically depends on modeling subtle difference from object parts. However, these parts often exhibit dramatic visual variations such as occlusions, viewpoints, and spatial transformations, making it hard to detect. In this paper, we present a novel attention-based model to automatically, selectively and accurately focus on critical object regions with higher importance against appearance variations. Given an image, two different Convolutional Neural Networks (CNNs) are constructed, where the outputs of two CNNs are correlated through bilinear pooling to simultaneously focus on discriminative regions and extract relevant features. To capture spatial distributions among the local regions with visual attention, soft attention based spatial LongShort Term Memory units (LSTMs) are incorporated to realize spatially recurrent yet visually selective over local input patterns. All the above intuitions equip our network with the following novel model: two-stream CNN layers, bilinear pooling layer, spatial recurrent layer with location attention are jointly trained via an end-to-end fashion to serve as the part detector and feature extractor, whereby relevant features are localized and extracted attentively. We show the significance of our network against two well-known visual recognition tasks: fine-grained image classification and person re-identification.",
"title": ""
},
{
"docid": "48096a9a7948a3842afc082fa6e223a6",
"text": "We present a method for using previously-trained ‘teacher’ agents to kickstart the training of a new ‘student’ agent. To this end, we leverage ideas from policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and population based training (Jaderberg et al., 2017). Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (Beattie et al., 2016), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple ‘expert’ teachers which specialise on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10× fewer steps, and surpassing its final performance by 42%. Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.",
"title": ""
},
{
"docid": "8e1947a9e890ef110c75a52d706eec2a",
"text": "Despite the rapid increase in online shopping, the literature is silent in terms of the interrelationship between perceived risk factors, the marketing impacts, and their influence on product and web-vendor consumer trust. This research focuses on holidaymakers’ perspectives using Internet bookings for their holidays. The findings reveal the associations between Internet perceived risks and the relatively equal influence of product and e-channel risks in consumers’ trust, and that online purchasing intentions are equally influenced by product and e-channel consumer trust. They also illustrate the relationship between marketing strategies and perceived risks, and provide managerial suggestions for further e-purchasing tourism improvement.",
"title": ""
},
{
"docid": "cab026368678748f2a48d797c6c7bfac",
"text": "The development of a microstrip-based L-band Dicke radiometer with the long-term stability required for future ocean salinity measurements to an accuracy of 0.1 psu is presented. This measurement requires the L-band radiometers to have calibration stabilities of les 0.05 K over 2 days. This research has focused on determining the optimum radiometer requirements and configuration to achieve this objective. System configuration and component performance have been evaluated with radiometer test beds at both JPL and GSFC. The GSFC test bed uses a cryogenic chamber that allows long-term characterization at radiometric temperatures in the range of 70 - 120 K. The research has addressed several areas including component characterization as a function of temperature and DC bias, system linearity, optimum noise diode injection calibration, and precision temperature control of components. A breadboard radiometer, utilizing microstrip-based technologies, has been built to demonstrate this long-term stability",
"title": ""
},
{
"docid": "b2246b58bb9fb6c6ff58115e25da49dc",
"text": "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by Gorelick et al. (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low quality video",
"title": ""
},
{
"docid": "25921de89de837e2bcd2a815ec181564",
"text": "Satellite-based Global Positioning Systems (GPS) have enabled a variety of location-based services such as navigation systems, and become increasingly popular and important in our everyday life. However, GPS does not work well in indoor environments where walls, floors and other construction objects greatly attenuate satellite signals. In this paper, we propose an Indoor Positioning System (IPS) based on widely deployed indoor WiFi systems. Our system uses not only the Received Signal Strength (RSS) values measured at the current location but also the previous location information to determine the current location of a mobile user. We have conducted a large number of experiments in the Schorr Center of the University of Nebraska-Lincoln, and our experiment results show that our proposed system outperforms all other WiFi-based RSS IPSs in the comparison, and is 5% more accurate on average than others. iii ACKNOWLEDGMENTS Firstly, I would like to express my heartfelt gratitude to my advisor and committee chair, Professor Lisong Xu and the co-advisor Professor Zhigang Shen for their constant encouragement and guidance throughout the course of my master's study and all the stages of the writing of this thesis. Without their consistent and illuminating instruction, this thesis work could not have reached its present form. Their technical and editorial advice and infinite patience were essential for the completion of this thesis. I feel privileged to have had the opportunity to study under them. I thank Professor Ziguo Zhong and Professor Mehmet Vuran for serving on my Master's Thesis defense committee, and their involvement has greatly improved and clarified this work. I specially thank Prof Ziguo Zhong again, since his support has always been very generous in both time and research resources. I thank all the CSE staff and friends, for their friendship and for all the memorable times in UNL. I would like to thank everyone who has helped me along the way. At last, I give my deepest thanks go to my parents for their self-giving love and support throughout my life.",
"title": ""
},
{
"docid": "ad0a69f92d511e02a24b8d77d3a17641",
"text": "Requirement engineering is an integral part of the software development lifecycle since the basis for developing successful software depends on comprehending its requirements in the first place. Requirement engineering involves a number of processes for gathering requirements in accordance with the needs and demands of users and stakeholders of the software product. In this paper, we have reviewed the prominent processes, tools and technologies used in the requirement gathering phase. The study is useful to perceive the current state of the affairs pertaining to the requirement engineering research and to understand the strengths and limitations of the existing requirement engineering techniques. The study also summarizes the best practices and how to use a blend of the requirement engineering techniques as an effective methodology to successfully conduct the requirement engineering task. The study also highlights the importance of security requirements as though they are part of the nonfunctional requirement, yet are naturally considered fundamental to secure software development.",
"title": ""
},
{
"docid": "426a7c1572e9d68f4ed2429f143387d5",
"text": "Face tracking is an active area of computer vision research and an important building block for many applications. However, opposed to face detection, there is no common benchmark data set to evaluate a tracker’s performance, making it hard to compare results between different approaches. In this challenge we propose a data set, annotation guidelines and a well defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.",
"title": ""
},
{
"docid": "15341073c2c47072f94bd41574312c3c",
"text": "In this paper, we review some advances made recently in the study of mobile phone datasets. This area of research has emerged a decade ago, with the increasing availability of large-scale anonymized datasets, and has grown into a stand-alone topic. We survey the contributions made so far on the social networks that can be constructed with such data, the study of personal mobility, geographical partitioning, urban planning, and help towards development as well as security and privacy issues.",
"title": ""
},
{
"docid": "fc1241fb88935978cc93911ccf56c41d",
"text": "Compared with object detection in static images, object detection in videos is more challenging due to degraded image qualities. An effective way to address this problem is to exploit temporal contexts by linking the same object across video to form tubelets and aggregating classification scores in the tubelets. In this paper, we focus on obtaining high quality object linking results for better classification. Unlike previous methods that link objects by checking boxes between neighboring frames, we propose to link in the same frame. To achieve this goal, we extend prior methods in following aspects: (1) a cuboid proposal network that extracts spatio-temporal candidate cuboids which bound the movement of objects; (2) a short tubelet detection network that detects short tubelets in short video segments; (3) a short tubelet linking algorithm that links temporally-overlapping short tubelets to form long tubelets. Experiments on the ImageNet VID dataset show that our method outperforms both the static image detector and the previous state of the art. In particular, our method improves results by 8.8% over the static image detector for fast moving objects.",
"title": ""
},
{
"docid": "c474df285da8106b211dc7fe62733423",
"text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.",
"title": ""
},
{
"docid": "87edb2e38382527fe8c4d8bda50398ed",
"text": "Convolutional neural networks (CNNs) are a popular and highly performant choice for pixel-wise dense prediction or generation. One of the commonly required components in such CNNs is a way to increase the resolution of the network’s input. The lower resolution inputs can be, for example, low-dimensional noise vectors in image generation [7] or low resolution (LR) feature maps for network visualization [4]. Originally described in Zeiler et al. [3], a network layer performing this upscaling task is commonly referred to as a “Deconvolution layer”, and has been used in a wide range of applications including super-resolution [1], semantic segmentation [5], flow estimation [6] and generative modeling [7]. The deconvolution layer can be described and implemented in various ways. This led to many names that are often used synonymously, including sub-pixel or fractional convolutional layer [7], transposed convolutional layer [8,9], inverse, up, backward convolutional layer [5,6] and more.",
"title": ""
},
{
"docid": "31dde18fc029d199404254e394914787",
"text": "We present an analysis of traffic to websites known for publishing fake news in the months preceding the 2016 US presidential election. The study is based on the combined instrumentation data from two popular desktop web browsers: Internet Explorer 11 and Edge. We find that social media was the primary outlet for the circulation of fake news stories and that aggregate voting patterns were strongly correlated with the average daily fraction of users visiting websites serving fake news. This correlation was observed both at the state level and at the county level, and remained stable throughout the main election season. We propose a simple model based on homophily in social networks to explain the linear association. Finally, we highlight examples of different types of fake news stories: while certain stories continue to circulate in the population, others are short-lived and die out in a few days.",
"title": ""
}
] |
scidocsrr
|
cdd6a25df3e536a2cbc3904578485e06
|
Online cost-sensitive neural network classifiers for non-stationary and imbalanced data streams
|
[
{
"docid": "eeff4d71a0af418828d5783a041b466f",
"text": "In recent years, advances in hardware technology have facilitated ne w ways of collecting data continuously. In many applications such as network monitorin g, the volume of such data is so large that it may be impossible to store the data on disk. Furthermore, even when the data can be stored, the volume of th incoming data may be so large that it may be impossible to process any partic ular record more than once. Therefore, many data mining and database op erati ns such as classification, clustering, frequent pattern mining and indexing b ecome significantly more challenging in this context. In many cases, the data patterns may evolve continuously, as a result of which it is necessary to design the mining algorithms effectively in order to accou nt f r changes in underlying structure of the data stream. This makes the solution s of the underlying problems even more difficult from an algorithmic and computa tion l point of view. This book contains a number of chapters which are caref ully chosen in order to discuss the broad research issues in data streams. The purp ose of this chapter is to provide an overview of the organization of the stream proces sing and mining techniques which are covered in this book.",
"title": ""
},
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
}
] |
[
{
"docid": "4411021509e81968639775d366da2e77",
"text": "This study aimed to explore both the direct and indirect relationships between depression, loneliness, low self-control, and Internet addiction in a sample of Turkish youth, based on a cognitive-behavioral model of generalized problematic Internet use. Data for the present study were collected from 648 undergraduate students with a mean age of 22.46 years (SD = 2.45). Participants completed scales for depression, loneliness, self-control and Internet addiction. Structural equation modeling was used to test the model in which depression and loneliness predicted Internet addiction through low self-control. The findings revealed that of the two factors, only loneliness was related to Internet addiction through low self-control. The results are discussed in terms of the cognitive-behavioral model of generalized problematic Internet use, and implications for practice are considered. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cf4a6b4c2deff7147c1a8a3be6f43e71",
"text": "Weakly supervised semantic segmentation receives much research attention since it alleviates the need to obtain a large amount of dense pixel-wise ground-truth annotations for the training images. Compared with other forms of weak supervision, image labels are quite efficient to obtain. In our work, we focus on the weakly supervised semantic segmentation with image label annotations. Recent progress for this task has been largely dependent on the quality of generated pseudoannotations. In this work, inspired by spatial neural-attention for image captioning, we propose a decoupled spatial neural attention network for generating pseudo-annotations. Our decoupled attention structure could simultaneously identify the object regions and localize the discriminative parts which generates high-quality pseudo-annotations in one forward path. The generated pseudoannotations lead to the segmentation results which achieve the state-of-the-art in weakly-supervised semantic segmentation.",
"title": ""
},
{
"docid": "027681fed6a8932935ea8ef9e49cea13",
"text": "Nowadays smartphones are ubiquitous and - to some extent - already used to support sports training, e.g. runners or bikers track their trip with a gps-enabled smartphone. But recent mobile technology has powerful processors that allow even more complex tasks like image or graphics processing. In this work we address the question on how mobile technology can be used for collaborative boulder training. More specifically, we present a mobile augmented reality application to support various parts of boulder training. The proposed approach also incorporates sharing and other social features. Thus our solution supports collaborative training by providing an intuitive way to create, share and define goals and challenges together with friends. Furthermore we propose a novel method of trackable generation for augmented reality. Synthetically generated images of climbing walls are used as trackables for real, existing walls.",
"title": ""
},
{
"docid": "654916f0c3f09bb4063c009efca55bae",
"text": "We consider the problem of threshold secret sharing in groups with hierarchical structure. In such settings, the secret is shared among a group of participants that is partitioned into levels. The access structure is then determined by a sequence of threshold requirements: a subset of participants is authorized if it has at least k0 0 members from the highest level, as well as at least k1 > k0 members from the two highest levels and so forth. Such problems may occur in settings where the participants differ in their authority or level of confidence and the presence of higher level participants is imperative to allow the recovery of the common secret. Even though secret sharing in hierarchical groups has been studied extensively in the past, none of the existing solutions addresses the simple setting where, say, a bank transfer should be signed by three employees, at least one of whom must be a department manager. We present a perfect secret sharing scheme for this problem that, unlike most secret sharing schemes that are suitable for hierarchical structures, is ideal. As in Shamir's scheme, the secret is represented as the free coefficient of some polynomial. The novelty of our scheme is the usage of polynomial derivatives in order to generate lesser shares for participants of lower levels. Consequently, our scheme uses Birkhoff interpolation, i.e., the construction of a polynomial according to an unstructured set of point and derivative values. A substantial part of our discussion is dedicated to the question of how to assign identities to the participants from the underlying finite field so that the resulting Birkhoff interpolation problem will be well posed. In addition, we devise an ideal and efficient secret sharing scheme for the closely related hierarchical threshold access structures that were studied by Simmons and Brickell.",
"title": ""
},
{
"docid": "087f9c2abb99d8576645a2460298c1b5",
"text": "In a community cloud, multiple user groups dynamically share a massive number of data blocks. The authors present a new associative data sharing method that uses virtual disks in the MeePo cloud, a research storage cloud built at Tsinghua University. Innovations in the MeePo cloud design include big data metering, associative data sharing, data block prefetching, privileged access control (PAC), and privacy preservation. These features are improved or extended from competing features implemented in DropBox, CloudViews, and MySpace. The reported results support the effectiveness of the MeePo cloud.",
"title": ""
},
{
"docid": "c2a32d79289299ef255ab53af02b7c6a",
"text": "Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.",
"title": ""
},
{
"docid": "e3e9ae0a3d41d6b631c82409cdcf1fba",
"text": "Electronic health records (EHRs) present a wealth of data that are vital for improving patient-centered outcomes, although the data can present significant statistical challenges. In particular, EHR data contains substantial missing information that if left unaddressed could reduce the validity of conclusions drawn. Properly addressing the missing data issue in EHR data is complicated by the fact that it is sometimes difficult to differentiate between missing data and a negative value. For example, a patient without a documented history of heart failure may truly not have disease or the clinician may have simply not documented the condition. Approaches for reducing missing data in EHR systems come from multiple angles, including: increasing structured data documentation, reducing data input errors, and utilization of text parsing / natural language processing. This paper focuses on the analytical approaches for handling missing data, primarily multiple imputation. The broad range of variables available in typical EHR systems provide a wealth of information for mitigating potential biases caused by missing data. The probability of missing data may be linked to disease severity and healthcare utilization since unhealthier patients are more likely to have comorbidities and each interaction with the health care system provides an opportunity for documentation. Therefore, any imputation routine should include predictor variables that assess overall health status (e.g. Charlson Comorbidity Index) and healthcare utilization (e.g. number of encounters) even when these comorbidities and patient encounters are unrelated to the disease of interest. Linking the EHR data with other sources of information (e.g. National Death Index and census data) can also provide less biased variables for imputation. Additional methodological research with EHR data and improved epidemiological training of clinical investigators is warranted.",
"title": ""
},
{
"docid": "290a0812a162c52db838e69c8b4f0d09",
"text": "Recent advances in next-generation sequencing technologies have facilitated the use of deoxyribonucleic acid (DNA) as a novel covert channels in steganography. There are various methods that exist in other domains to detect hidden messages in conventional covert channels. However, they have not been applied to DNA steganography. The current most common detection approaches, namely frequency analysis-based methods, often overlook important signals when directly applied to DNA steganography because those methods depend on the distribution of the number of sequence characters. To address this limitation, we propose a general sequence learning-based DNA steganalysis framework. The proposed approach learns the intrinsic distribution of coding and non-coding sequences and detects hidden messages by exploiting distribution variations after hiding these messages. Using deep recurrent neural networks (RNNs), our framework identifies the distribution variations by using the classification score to predict whether a sequence is to be a coding or non-coding sequence. We compare our proposed method to various existing methods and biological sequence analysis methods implemented on top of our framework. According to our experimental results, our approach delivers a robust detection performance compared to other tools.",
"title": ""
},
{
"docid": "bbe3551f2ed95dc2ca08dcff67186fba",
"text": "A high-dimensional shape transformation posed in a mass-preserving framework is used as a morphological signature of a brain image. Population differences with complex spatial patterns are then determined by applying a nonlinear support vector machine (SVM) pattern classification method to the morphological signatures. Significant reduction of the dimensionality of the morphological signatures is achieved via wavelet decomposition and feature reduction methods. Applying the method to MR images with simulated atrophy shows that the method can correctly detect subtle and spatially complex atrophy, even when the simulated atrophy represents only a 5% variation from the original image. Applying this method to actual MR images shows that brains can be correctly determined to be male or female with a successful classification rate of 97%, using the leave-one-out method. This proposed method also shows a high classification rate for old adults' age classification, even under difficult test scenarios. The main characteristic of the proposed methodology is that, by applying multivariate pattern classification methods, it can detect subtle and spatially complex patterns of morphological group differences which are often not detectable by voxel-based morphometric methods, because these methods analyze morphological measurements voxel-by-voxel and do not consider the entirety of the data simultaneously.",
"title": ""
},
{
"docid": "5d2eabccd2e9873b00de3d21903f8ba7",
"text": "In prior work we have demonstrated the noise robustness of a novel microphone solution, the PARAT earplug communication terminal. Here we extend that work with results for the ETSI Advanced Front-End and segmental cepstral mean and variance normalization (CMVN). We also propose a method for doing CMVN in the model domain. This removes the need to train models on normalized features, which may significantly extend the applicability of CMVN. The recognition results are comparable to those of the traditional approach.",
"title": ""
},
{
"docid": "fa387f6bc59f9e6b062641d53776a103",
"text": "Terrorist attacks are increasing in both frequency and complexity around the world. In 2016 alone, there were more than 13,400 terrorist attacks globally, killing more than 34,000 people.1 Of equal concern, chemical-warfare agents that were developed for the battlefield are being used on civilians in major cities and conflict zones. The recent sarin attacks in Syria,2,3 the latest in a series of chemical attacks in that region,3,4 along with the use of the nerve agent VX in the assassination of Kim Jong-nam in Malaysia and the Sovietera agent Novichok in the poisoning of Sergei Skripal in the United Kingdom,5 all represent a worrisome trend in the use of deadly chemical agents by various rogue groups in civilian settings. In light of the rise in coordinated, multimodal terrorist attacks in Western urban centers,6,7 concern has been expressed about an increase in the use of chemical agents by terrorists on civilian targets around the world. Such attacks entail unique issues in on-the-scene safety8,9 and also require a rapid medical response.10 As health care providers, we must be proactive in how we prepare for and respond to this new threat. This article reviews the toxidromes (constellations of signs and symptoms that are characteristic of a given class of agents) for known and suspected chemicalwarfare agents that have properties that are well suited for terrorist attacks — namely, high volatility and rapid onset of incapacitating or lethal effects.11 Poisoncontrol procedures currently use toxidromes to identify specific classes of agents. Although symptoms such as eye irritation and coughing are common to a number of classes, specific clinical findings, including fasciculations, hypersecretions, early seizure, and miosis or mydriasis, can be rapidly identified as part of an acutephase triage system and used to differentiate among classes of agents. This should lead to reduced morbidity and mortality while also decreasing the risk to responding health care workers.12 The combined group of chemical-warfare agents examined here includes nerve agents, asphyxiants (blood agents), opioid agents, anesthetic agents, anticholinergic (antimuscarinic) agents, botulinum toxin, pulmonary agents, caustic agents (acids), riot-control agents, T-2 toxin, and vesicants (Table 1).",
"title": ""
},
{
"docid": "0d90fdb9568ca23c608bdfdae03d26c9",
"text": "Thank you for reading pattern recognition statistical structural and neural approaches. Maybe you have knowledge that, people have look numerous times for their chosen readings like this pattern recognition statistical structural and neural approaches, but end up in malicious downloads. Rather than reading a good book with a cup of coffee in the afternoon, instead they juggled with some harmful bugs inside their computer.",
"title": ""
},
{
"docid": "cb130a706cf66f92a1918c58a87847ed",
"text": "Single component organic photodetectors capable of broadband light sensing represent a paradigm shift for designing flexible and inexpensive optoelectronic devices. The present study demonstrates the application of a new quadrupolar 1,4-dihydropyrrolo[3,2-b]pyrrole derivative with spectral sensitivity across 350-830 nm as a potential broadband organic photodetector (OPD) material. The amphoteric redox characteristics evinced from the electrochemical studies are exploited to conceptualize a single component OPD with ITO and Al as active electrodes. The photodiode showed impressive broadband photoresponse to monochromatic light sources of 365, 470, 525, 589, 623, and 830 nm. Current density-voltage (J-V) and transient photoresponse studies showed stable and reproducible performance under continuous on/off modulations. The devices operating in reverse bias at 6 V displayed broad spectral responsivity (R) and very good detectivity (D*) peaking a maximum 0.9 mA W-1 and 1.9 × 1010 Jones (at 623 nm and 500 μW cm-2) with a fast rise and decay times of 75 and 140 ms, respectively. Low dark current densities ranging from 1.8 × 10-10 Acm-2 at 1 V to 7.2 × 10-9 A cm-2 at 6 V renders an operating range to amplify the photocurrent signal, spectral responsivity, and detectivity. Interestingly, the fabricated OPDs display a self-operational mode which is rarely reported for single component organic systems.",
"title": ""
},
{
"docid": "c949773416c88c141ca7ee5065852af6",
"text": "Aim: This study attempts to measure patient and Registration and Admission (R&A) staff satisfaction levels towards the Traditional Queuing Method (TQM) in comparison with a proposed Online Registration System (ORS). This study also investigates patients’ perceptions of the ORS and the feasibility and acceptance of the R&A staff in a healthcare organization. Materials and Methods: A stratified random sampling technique was used to distribute 385 questionnaires among outpatients registration area to gather indicating information and perspectives. Additionally, eleven face-to-face semi-structured interviews with front line hospital workers in the R&A department were conducted using a thematic content analysis approach to analyze the contents and produce results. In order for the researcher to have a direct understanding of the registration processes and activities and to gain a better understanding of the patients’ behaviors and attitudes toward them; a non-participant observation approach was conducted where observational encounters’ notes were taken and then analyzed. Results: It was found that most outpatient population (patients and registration staff) prefer ORS for a range of reasons including time consumption, cost benefit, patient comfort, data sensitivity, effortless, easiness, accuracy, and less errors. On the other hand, around 10% of them chose to go on with the TQM. Their reasons ranged from the unavailability of computer devices or internet connections to their educational backgrounds or physical disabilities. Computing devices and internet availability proved not to be an issue for the successful implementation of the ORS system, as most participants consented to having an internet connection or a device to enter ORS system (91%). Conclusion: Since more than half of the participated patients were unhappy with the TQM at registration desks (59.7%), this dissatisfaction should be addressed by an ORS implementaion that would reduce waiting time, enhance the level of attention, and improve services from frontline staff toward patients’ care.",
"title": ""
},
{
"docid": "695c396f27ba31f15f7823511473925c",
"text": "Design and experimental analysis of beam steering in microstrip patch antenna array using dumbbell shaped Defected Ground Structure (DGS) for S-band (5.2 GHz) application was carried out in this study. The Phase shifting in antenna has been achieved using different size and position of dumbbell shape DGS. DGS has characteristics of slow wave, wide stop band and compact size. The obtained radiation pattern has provided steerable main lobe and nulls at predefined direction. The radiation pattern for different size and position of dumbbell structure in microstrip patch antenna array was measured and comparative study has been carried out.",
"title": ""
},
{
"docid": "87ebf3c29afc0ea6b8c386f8f5ba31f9",
"text": "In this study, we present a weakly supervised approach that discovers the discriminative structures of sketch images, given pairs of sketch images and web images. In contrast to traditional approaches that use global appearance features or relay on keypoint features, our aim is to automatically learn the shared latent structures that exist between sketch images and real images, even when there are significant appearance differences across its relevant real images. To accomplish this, we propose a deep convolutional neural network, named SketchNet. We firstly develop a triplet composed of sketch, positive and negative real image as the input of our neural network. To discover the coherent visual structures between the sketch and its positive pairs, we introduce the softmax as the loss function. Then a ranking mechanism is introduced to make the positive pairs obtain a higher score comparing over negative ones to achieve robust representation. Finally, we formalize above-mentioned constrains into the unified objective function, and create an ensemble feature representation to describe the sketch images. Experiments on the TUBerlin sketch benchmark demonstrate the effectiveness of our model and show that deep feature representation brings substantial improvements over other state-of-the-art methods on sketch classification.",
"title": ""
},
{
"docid": "f27391f29b44bfa9989146566a288b79",
"text": "An appealing feature of blockchain technology is smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. In this paper, we conduct a systematic mapping study to collect all research that is relevant to smart contracts from a technical perspective. The aim of doing so is to identify current research topics and open challenges for future studies in smart contract research. We extract 24 papers from different scientific databases. The results show that about two thirds of the papers focus on identifying and tackling smart contract issues. Four key issues are identified, namely, codifying, security, privacy and performance issues. The rest of the papers focuses on smart contract applications or other smart contract related topics. Research gaps that need to be addressed in future studies are provided.",
"title": ""
},
{
"docid": "ba6f35f4d7449aacfe832eddb122bf98",
"text": "Users who downloaded this article also downloaded: Mohamed Zairi, (1997),\"Business process management: a boundaryless approach to modern competitiveness\", Business Process Management Journal, Vol. 3 Iss 1 pp. 64-80 http://dx.doi.org/10.1108/14637159710161585 Ryan K.L. Ko, Stephen S.G. Lee, Eng Wah Lee, (2009),\"Business process management (BPM) standards: a survey\", Business Process Management Journal, Vol. 15 Iss 5 pp. 744-791 http://dx.doi.org/10.1108/14637150910987937 Mohamed Zairi, David Sinclair, (1995),\"Business process re#engineering and process management: A survey of current practice and future trends in integrated management\", Business Process Re-engineering & Management Journal, Vol. 1 Iss 1 pp. 8-30",
"title": ""
},
{
"docid": "732da8eb4c41d6bf70ded5866fadd334",
"text": "Ferroelectric field effect transistors (FeFETs) based on ferroelectric hafnium oxide (HfO2) thin films show high potential for future embedded nonvolatile memory applications. However, HfO2 films besides their recently discovered ferroelectric behavior are also prone to undesired charge trapping effects. Therefore, the scope of this paper is to verify the possibility of the charge trapping during standard operation of the HfO2-based FeFET memories. The kinetics of the charge trapping and its interplay with the ferroelectric polarization switching are analyzed in detail using the single-pulse ID-VG technique. Furthermore, the impact of the charge trapping on the important memory characteristics such as retention and endurance is investigated.",
"title": ""
},
{
"docid": "87eb69d6404bf42612806a5e6d67e7bb",
"text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.",
"title": ""
}
] |
scidocsrr
|
dc93d87a5857982f3138c024e639d4ae
|
ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras
|
[
{
"docid": "662ae9d792b3889dbd0450a65259253a",
"text": "We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. Experiments show robust operation in challenging indoor and outdoor environments with a very large ranges of scene depth, varied motion, and also real time 360deg loop closing.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
},
{
"docid": "1f27caaaeae8c82db6a677f66f2dee74",
"text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"title": ""
}
] |
[
{
"docid": "4ff75b22504d23c936610d3845337f1b",
"text": "In the May 2007 issue of Pediatric Radiology, the article “Can classic metaphyseal lesions follow uncomplicated caesarean section?” [1] suggested that enough trauma could occur under these circumstances to produce fractures previously described as “highly specific for child abuse” [2]. However, the question of whether themetaphyses were normal to begin with was not raised. Why should this be an issue? Vitamin D deficiency (DD), initially believed to primarily affect the elderly and dark-skinned populations in the US, is now being demonstrated in otherwise healthy young adults, children, and infants of all races. In a review article on vitamin D published in the New England Journal of Medicine last year [3], Holick reviewed some of the recent literature, showing deficiency and insufficiency rates of 52% among Hispanic and African-American adolescents in Boston, 48% among white preadolescent females in Maine, 42% among African American females between 15 and 49 years of age, and 32% among healthy white men and women 18 to 29 years of age in Boston. A recent study of healthy infants and toddlers aged 8 to 24 months in Boston found an insufficiency rate of 40% and a deficiency rate of 12.1% [4]. In September 2007, a number of articles about congenital rickets were published in the Archives of Diseases in Childhood including an international perspective of mother and newborn DD reported from around the world [5]. Concentrations of 25-hydroxyvitamin D [25(OH)D] less than 25 nmol/l (10 ng/ml) were found in 18%, 25%, 80%, 42% and 61% of pregnant women in the UK, UAE, Iran, northern India and New Zealand, respectively, and in 60 to 84% of non-western women in the Netherlands. Currently, most experts in the US define DD as a 25(OH)D level less than 50 nmol/l (20 ng/ml). Levels between 20 and 30 ng/ml are considered to indicate insufficiency, reflecting increasing parathyroid hormone (PTH) levels and decreasing calcium absorption [3]. With such high prevalence of DD in our healthy young women, congenital deficiency is inevitable, since neonatal 25(OH)D concentrations are approximately two-thirds the maternal level [6]. Bodnar et al. [7] at the University of Pittsburgh, in the largest US study of mother and newborn infant vitamin D levels, found deficient or insufficient levels in 83% of black women and 92% of their newborns, as well as in 47% of white women and 66% of their newborns. The deficiencies were worse in the winter than in the summer. Over 90% of these women were on prenatal vitamins. Research is currently underway to formulate more appropriate recommendations for vitamin D supplementation during pregnancy (http://clinicaltrials.gov, ID: R01 HD043921). The obvious question is, “Why has DD once again become so common?” Multiple events have led to the high rates of DD. In the past, many foods were fortified with Pediatr Radiol (2008) 38:1210–1216 DOI 10.1007/s00247-008-1001-z",
"title": ""
},
{
"docid": "90414004f8681198328fb48431a34573",
"text": "Process models play important role in computer aided process engineering. Although the structure of these models are a priori known, model parameters should be estimated based on experiments. The accuracy of the estimated parameters largely depends on the information content of the experimental data presented to the parameter identification algorithm. Optimal experiment design (OED) can maximize the confidence on the model parameters. The paper proposes a new additive sequential evolutionary experiment design approach to maximize the amount of information content of experiments. The main idea is to use the identified models to design new experiments to gradually improve the model accuracy while keeping the collected information from previous experiments. This scheme requires an effective optimization algorithm, hence the main contribution of the paper is the incorporation of Evolutionary Strategy (ES) into a new iterative scheme of optimal experiment design (AS-OED). This paper illustrates the applicability of AS-OED for the design of feeding profile for a fed-batch biochemical reactor.",
"title": ""
},
{
"docid": "17db3273504bba730c9e43c8ea585250",
"text": "In this paper, License plate localization and recognition (LPLR) is presented. It uses image processing and character recognition technology in order to identify the license number plates of the vehicles automatically. This system is considerable interest because of its good application in traffic monitoring systems, surveillance devices and all kind of intelligent transport system. The objective of this work is to design algorithm for License Plate Localization and Recognition (LPLR) of Tanzanian License Plates. The plate numbers used are standard ones with black and yellow or black and white colors. Also, the letters and numbers are placed in the same row (identical vertical levels), resulting in frequent changes in the horizontal intensity. Due to that, the horizontal changes of the intensity have been easily detected, since the rows that contain the number plates are expected to exhibit many sharp variations. Hence, the edge finding method is exploited to find the location of the plate. To increase readability of the plate number, part of the image was enhanced, noise removal and smoothing median filter is used due to easy development. The algorithm described in this paper is implemented using MATLAB 7.11.0(R2010b).",
"title": ""
},
{
"docid": "b8a3e056fe80783b51190c378d5ddcb2",
"text": "We investigate the capability of GPS signals of opportunity to detect and localize targets on the sea surface. The proposed approach to target detection is new, and stems from the advantages offered by GPS-Reflectometry (GPS-R) in terms of spatial and temporal sampling, and low cost/low power technology, extending the range of applications of GPS-R beyond remote sensing. Here the exploitation of GPS signals backscattered from a target is proposed, to enhance the target return with respect to the sea clutter. A link budget is presented, showing that the target return is stronger than the background sea clutter when certain conditions are verified. The findings agree with the only empirical measurement found in literature, where a strong return from a target was fortuitously registered during an airborne campaign. This study provides a first proof-of-concept of GPS-based target detection, highlighting all the potentials of this innovative approach.",
"title": ""
},
{
"docid": "ef7d2afe9206e56479a4098b6255aa4b",
"text": "Cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses profound resources and has full control and dynamic allocation capability of its resources. Therefore, cloud offers us the potential to overcome DDoS attacks. However, individual cloud hosted servers are still vulnerable to DDoS attacks if they still run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim in order to quickly filter out attack packets and guarantee the quality of the service for benign users simultaneously. We establish a mathematical model to approximate the needs of our resource investment based on queueing theory. Through careful system analysis and real-world data set experiments, we conclude that we can defeat DDoS attacks in a cloud environment.",
"title": ""
},
{
"docid": "88acb55335bc4530d8dfe5f44738d39f",
"text": "Driving is an attention-demanding task, especially with children in the back seat. While most recommendations prefer to reduce children's screen time in common entertainment systems, e.g. DVD players and tablets, parents often rely on these systems to entertain the children during car trips. These systems often lack key components that are important for modern parents, namely, sociability and educational content. In this contribution we introduce PANDA, a parental affective natural driving assistant. PANDA is a virtual in-car entertainment agent that can migrate around the car to interact with the parent-driver or with children in the back seat. PANDA supports the parent-driver via speech interface, helps to mediate her interaction with children in the back seat, and works to reduce distractions for the driver while also engaging, entertaining and educating children. We present the design of PANDA system and preliminary tests of the prototype system in a car setting.",
"title": ""
},
{
"docid": "0dac38edf20c2a89a9eb46cd1300162c",
"text": "Common software weaknesses, such as improper input validation, integer overflow, can harm system security directly or indirectly, causing adverse effects such as denial-of-service, execution of unauthorized code. Common Weakness Enumeration (CWE) maintains a standard list and classification of common software weakness. Although CWE contains rich information about software weaknesses, including textual descriptions, common sequences and relations between software weaknesses, the current data representation, i.e., hyperlined documents, does not support advanced reasoning tasks on software weaknesses, such as prediction of missing relations and common consequences of CWEs. Such reasoning tasks become critical to managing and analyzing large numbers of common software weaknesses and their relations. In this paper, we propose to represent common software weaknesses and their relations as a knowledge graph, and develop a translation-based, description-embodied knowledge representation learning method to embed both software weaknesses and their relations in the knowledge graph into a semantic vector space. The vector representations (i.e., embeddings) of software weaknesses and their relations can be exploited for knowledge acquisition and inference. We conduct extensive experiments to evaluate the performance of software weakness and relation embeddings in three reasoning tasks, including CWE link prediction, CWE triple classification, and common consequence prediction. Our knowledge graph embedding approach outperforms other description- and/or structure-based representation learning methods.",
"title": ""
},
{
"docid": "aac3060a199b016e38be800c213c9dba",
"text": "In this paper, we investigate the use of electroencephalograhic signals for the purpose of recognizing unspoken speech. The term unspoken speech refers to the process in which a subject imagines speaking a given word without moving any articulatory muscle or producing any audible sound. Early work by Wester (Wester, 2006) presented results which were initially interpreted to be related to brain activity patterns due to the imagination of pronouncing words. However, subsequent investigations lead to the hypothesis that the good recognition performance might instead have resulted from temporal correlated artifacts in the brainwaves since the words were presented in blocks. In order to further investigate this hypothesis, we run a study with 21 subjects, recording 16 EEG channels using a 128 cap montage. The vocabulary consists of 5 words, each of which is repeated 20 times during a recording session in order to train our HMM-based classifier. The words are presented in blockwise, sequential, and random order. We show that the block mode yields an average recognition rate of 45.50%, but it drops to chance level for all other modes. Our experiments suggest that temporal correlated artifacts were recognized instead of words in block recordings and back the above-mentioned hypothesis.",
"title": ""
},
{
"docid": "822a4971bb1e92ddf47fd732a652ebb9",
"text": "The axial-flux permanent-magnet machine (AFPM) topology is suited for direct-drive applications and, due to their enhanced flux-weakening capability, AFPMs having slotted windings are the most promising candidates for use in wheel-motor drives. In consideration of this, this paper deals with an experimental study devoted to investigate a number of technical solutions to be used in AFPMs having slotted windings in order to achieve substantial reduction of both cogging torque and no-load power loss in the machine. To conduct such an experimental study, a laboratory machine was purposely built incorporating facilities that allow easy-to-achieve offline modifications of the overall magnetic arrangement at the machine air gaps, such as magnet skewing, angular shifting between rotor discs, and accommodation of either PVC or Somaloy wedges for closing the slot openings. The paper discusses experimental results and gives guidelines for the design of AFPMs with improved performance.",
"title": ""
},
{
"docid": "5c9f0843ebc26bf8a52d7633acc33c58",
"text": "This thesis aim is to present results on a stochastic model called reinforced random walk. This process was conceived in the late 1980’s by Coppersmith and Diaconis and can be regarded as a generalization of a random walk on a weighted graph. These reinforced walks have non-homogeneous transition probabilities, which arise from an interaction between the process and the weights. We survey articles on the subject, perform simulations and extend a theorem by Pemantle.",
"title": ""
},
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
},
{
"docid": "8a695d5913c3b87fb21864c0bdd3d522",
"text": "Environmental topics have gained much consideration in corporate green operations. Globalization, stakeholder pressures, and stricter environmental regulations have made organizations develop environmental practices. Thus, green supply chain management (GSCM) is now a proactive approach for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers using the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the important and causal relationships between GSCM practices and performances. DEMATEL evaluates GSCM practices to find the main practices to improve both environmental and economic performances. This study uses intuitionistic fuzzy set theory to handle the linguistic imprecision and the ambiguity of human being’s judgment. A case study from the automotive industry is presented to evaluate the efficiency of the proposed method. The results reveal ‘‘internal management support’’, ‘‘green purchasing’’ and ‘‘ISO 14001 certification’’ are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible, while improving their economic and environmental performance goals. Further, a sensitivity analysis of results, managerial implications, conclusions, limitations and future research opportunities are provided. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "693b07ee12e83aae9f7f9f0c5a637403",
"text": "Many consumer-centric industries provide products and services to millions of consumers. These industries include healthcare and wellness, retail, hospitality and travel, sports and entertainment, legal services, financial services, residential real estate and many more. IT professionals and business executives are used to thinking about enterprise-centric ERP systems as the IT center of gravity, but increasingly the focus of IT activity is shifting from the enterprise Center to the Edge of the enterprise as consumers are digitally connected and activated. Enabling this shift requires managing both IT deployment and organizational transformation at the Center of the enterprise, as well as accommodating consumers’ digital interactions at the Edge and understanding how to realize new strategic value through the shift. This article examines the phenomenon of Center-Edge digital transformation in consumercentric industries through a case study in the healthcare industry. It provides guidelines for IT and business executives in any consumer-centric industry who would like to understand how to",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
},
{
"docid": "0a855a4e04d5b2c34d6f03653ad93daf",
"text": "The analysis of human activities is one of the most intriguing and important open issues for the automated video surveillance community. Since few years ago, it has been handled following a mere Computer Vision and Pattern Recognition perspective, where an activity corresponded to a temporal sequence of explicit actions (run, stop, sit, walk, etc.). Even under this simplistic assumption, the issue is hard, due to the strong diversity of the people appearance, the number of individuals considered (we may monitor single individuals, groups, crowd), the variability of the environmental conditions (indoor/outdoor, different weather conditions), and the kinds of sensors employed. More recently, the automated surveillance of human activities has been faced considering a new perspective, that brings in notions and principles from the social, affective, and psychological literature, and that is called Social Signal Processing (SSP). SSP employs primarily nonverbal cues, most of them are outside of conscious awareness, like face expressions and gazing, body posture and gestures, vocal characteristics, relative distances in the space and the like. This paper is the first review analyzing this new trend, proposing a structured snapshot of the state of the art and envisaging novel challenges in the surveillance domain where the cross-pollination of Computer Science technologies and Sociology theories may offer valid investigation strategies.",
"title": ""
},
{
"docid": "055c9fad6d2f246fc1b6cbb1bce26a92",
"text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.",
"title": ""
},
{
"docid": "b396c013aba3aa80c6ea83db116031a1",
"text": "Fine-grained Sketch-based Image Retrieval (Fine-grained SBIR), which uses hand-drawn sketches to search the target object images, has been an emerging topic over the last few years. The difficulties of this task not only come from the ambiguous and abstract characteristics of sketches with less useful information, but also the cross-modal gap at both visual and semantic level. However, images on the web are always exhibited with multimodal contents. In this paper, we consider Fine-grained SBIR as a cross-modal retrieval problem and propose a deep multimodal embedding model that exploits all the beneficial multimodal information sources in sketches and images. In our experiment with large quantity of public data, we show that the proposed method outperforms the state-of-the-art methods for Fine-grained SBIR.",
"title": ""
},
{
"docid": "7b1b0e31384cb99caf0f3d8cf8134a53",
"text": "Toxic epidermal necrolysis (TEN) is one of the most threatening adverse reactions to various drugs. No case of concomitant occurrence TEN and severe granulocytopenia following the treatment with cefuroxime has been reported to date. Herein we present a case of TEN that developed eighteen days of the initiation of cefuroxime axetil therapy for urinary tract infection in a 73-year-old woman with chronic renal failure and no previous history of allergic diathesis. The condition was associated with severe granulocytopenia and followed by gastrointestinal hemorrhage, severe sepsis and multiple organ failure syndrome development. Despite intensive medical treatment the patient died. The present report underlines the potential of cefuroxime to simultaneously induce life threatening adverse effects such as TEN and severe granulocytopenia. Further on, because the patient was also taking furosemide for chronic renal failure, the possible unfavorable interactions between the two drugs could be hypothesized. Therefore, awareness of the possible drug interaction is necessary, especially when given in conditions of their altered pharmacokinetics as in case of chronic renal failure.",
"title": ""
},
{
"docid": "c426832d19409bd842ca98ffef212cb5",
"text": "Cybersecurity is among the highest priorities in industries, academia and governments. Cyber-threats information sharing among different organizations has the potential to maximize vulnerabilities discovery at a minimum cost. Cyber-threats information sharing has several advantages. First, it diminishes the chance that an attacker exploits the same vulnerability to launch multiple attacks in different organizations. Second, it reduces the likelihood an attacker can compromise an organization and collect data that will help him launch an attack on other organizations. Cyberspace has numerous interconnections and critical infrastructure owners are dependent on each other's service. This well-known problem of cyber interdependency is aggravated in a public cloud computing platform. The collaborative effort of organizations in developing a countermeasure for a cyber-breach reduces each firm's cost of investment in cyber defense. Despite its multiple advantages, there are costs and risks associated with cyber-threats information sharing. When a firm shares its vulnerabilities with others there is a risk that these vulnerabilities are leaked to the public (or to attackers) resulting in loss of reputation, market share and revenue. Therefore, in this strategic environment the firms committed to share cyber-threats information might not truthfully share information due to their own self-interests. Moreover, some firms acting selfishly may rationally limit their cybersecurity investment and rely on information shared by others to protect themselves. This can result in under investment in cybersecurity if all participants adopt the same strategy. This paper will use game theory to investigate when multiple self-interested firms can invest in vulnerability discovery and share their cyber-threat information. We will apply our algorithm to a public cloud computing platform as one of the fastest growing segments of the cyberspace.",
"title": ""
},
{
"docid": "887b43b17bc273e7478feaecb0ff9cba",
"text": "For many years, there has been considerable debate about whether the IT revolution was paying off in higher productivity. Studies in the 1980s found no connection between IT investment and productivity in the U.S. economy, a situation referred to as the productivity paradox. Since then, a decade of studies at the firm and country level has consistently shown that the impact of IT investment on labor productivity and economic growth is significant and positive. This article critically reviews the published research, more than 50 articles, on computers and productivity. It develops a general framework for classifying the research, which facilitates identifying what we know, how well we know it, and what we do not know. The framework enables us to systematically organize, synthesize, and evaluate the empirical evidence and to identify both limitations in existing research and data and substantive areas for future research.The review concludes that the productivity paradox as first formulated has been effectively refuted. At both the firm and the country level, greater investment in IT is associated with greater productivity growth. At the firm level, the review further concludes that the wide range of performance of IT investments among different organizations can be explained by complementary investments in organizational capital such as decentralized decision-making systems, job training, and business process redesign. IT is not simply a tool for automating existing processes, but is more importantly an enabler of organizational changes that can lead to additional productivity gains.In mid-2000, IT capital investment began to fall sharply due to slowing economic growth, the collapse of many Internet-related firms, and reductions in IT spending by other firms facing fewer competitive pressures from Internet firms. This reduction in IT investment has had devastating effects on the IT-producing sector, and may lead to slower economic and productivity growth in the U.S. economy. While the turmoil in the technology sector has been unsettling to investors and executives alike, this review shows that it should not overshadow the fundamental changes that have occurred as a result of firms' investments in IT. Notwithstanding the demise of many Internet-related companies, the returns to IT investment are real, and innovative companies continue to lead the way.",
"title": ""
}
] |
scidocsrr
|
702d303e9180a55aaf3d7f2be5d56866
|
Convolutional capsule network for classification of breast cancer histology images
|
[
{
"docid": "c2b1dd2d2dd1835ed77cf6d43044eed8",
"text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the handengineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.",
"title": ""
},
{
"docid": "f25a5e20c5e92e9a77d708424b05f69d",
"text": "Prompt and widely available diagnostics of breast cancer is crucial for the prognosis of patients. One of the diagnostic methods is the analysis of cytological material from the breast. This examination requires extensive knowledge and experience of the cytologist. Computer-aided diagnosis can speed up the diagnostic process and allow for large-scale screening. One of the largest challenges in the automatic analysis of cytological images is the segmentation of nuclei. In this study, four different clustering algorithms are tested and compared in the task of fast nuclei segmentation. K-means, fuzzy C-means, competitive learning neural networks and Gaussian mixture models were incorporated for clustering in the color space along with adaptive thresholding in grayscale. These methods were applied in a medical decision support system for breast cancer diagnosis, where the cases were classified as either benign or malignant. In the segmented nuclei, 42 morphological, topological and texture features were extracted. Then, these features were used in a classification procedure with three different classifiers. The system was tested for classification accuracy by means of microscopic images of fine needle breast biopsies. In cooperation with the Regional Hospital in Zielona Góra, 500 real case medical images from 50 patients were collected. The acquired classification accuracy was approximately 96-100%, which is very promising and shows that the presented method ensures accurate and objective data acquisition that could be used to facilitate breast cancer diagnosis.",
"title": ""
}
] |
[
{
"docid": "86c998f5ffcddb0b74360ff27b8fead4",
"text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.",
"title": ""
},
{
"docid": "b29555dc0d8e2002549b0a533e05e082",
"text": "A fully passive, compact, and low-cost capacitive wireless radio frequency identification (RFID)-enabled sensing system for capacitive sensing and other Internet of Things applications is proposed. This calibration-free sensor utilizes a dual-tag topology, which consists of two closely spaced RFID tags with dipole antennas and printed capacitive sensor component connected to one of the tags. A series LC resonator is used to both reduce the antenna size and improve the isolation between the two antennas and the design/optimization steps are discussed in detail. All components except for the RFID chips are inkjet printed on an off-the-shelf photopaper using a silver nanoparticle ink. The complete sensor dimension is 84 mm × mm and the sensor is compatible with EPC Class 1 Gen 2 (UHF) standard reader technology at 915 MHz.",
"title": ""
},
{
"docid": "73df32e956fd9fdc72dfead2c1780f7e",
"text": "Sparsity driven signal processing has gained tremendous popularity in the last decade. At its core, the assumption is that the signal of interest is sparse with respect to either a fixed transformation or a signal dependent dictionary. To better capture the data characteristics, various dictionary learning methods have been proposed for both reconstruction and classification tasks. For classification particularly, most approaches proposed so far have focused on designing explicit constraints on the sparse code to improve classification accuracy while simply adopting l0-norm or l1-norm for sparsity regularization. Motivated by the success of structured sparsity in the area of Compressed Sensing, we propose a structured dictionary learning framework (StructDL) that incorporates the structure information on both group and task levels in the learning process. Its benefits are two-fold: (i) the label consistency between dictionary atoms and training data are implicitly enforced; and (ii) the classification performance is more robust in the cases of a small dictionary size or limited training data than other techniques. Using the subspace model, we derive the conditions for StructDL to guarantee the performance and show theoretically that StructDL is superior to l0-norm or l1-norm regularized dictionary learning for classification. Extensive experiments have been performed on both synthetic simulations and real world applications, such as face recognition and object classification, to demonstrate the validity of the proposed DL framework.",
"title": ""
},
{
"docid": "de970d5359f2bf5ed510852e8d68d57d",
"text": "The effect of dietary Bacillus-based direct-fed microbials (DFMs; eight single strains designated as Bs2084, LSSAO1, 3AP4, Bs18, 15AP4, 22CP1, Bs27, and Bs278, and one multiple-strain DFM product [AVICORR]) on growth performance, intestinal lesions, and innate and acquired immunities were evaluated in broiler chickens following Eimeria maxima (EM) infection. EM-induced reduction of body weight gain and intestinal lesions were significantly decreased by addition of 15AP4 or Bs27 into broiler diets compared with EM-infected control birds. Serum nitric oxide levels were increased in infected chickens fed with Bs27, but lowered in those given Bs2084, LSSAO1, 3AP4 or 15AP4 compared with the infected controls. Recombinant coccidial antigen (3-1E)-stimulated spleen cell proliferation was increased in chickens given Bs27, 15AP4, LSSAO1, 3AP4, or Bs18, compared with the infected controls. Finally, all experimental diets increased concanavalin A-induced splenocyte mitogenesis in infected broilers compared with the nonsupplemented and infected controls. In summary, dietary Bacillus subtilis-based DFMs reduced the clinical signs of experimental avian coccidiosis and increased various parameters of immunity in broiler chickens in a strain-dependent manner.",
"title": ""
},
{
"docid": "72a6a7fe366def9f97ece6d1ddc46a2e",
"text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.",
"title": ""
},
{
"docid": "e247a8ae2a83150d83a8248ec96a4708",
"text": "Benjamin Beck’s definition of tool use has served the field of animal cognition well for over 25 years (Beck 1980, Animal Tool Behavior: the Use and Manufacture of Tools, New York, Garland STPM). This article proposes a new, more explanatory definition that accounts for tool use in terms of two complementary subcategories of behaviours: behaviours aimed at altering a target object by mechanical means and behaviours that mediate the flow of information between the tool user and the environment or other organisms in the environment. The conceptual foundation and implications of the new definition are contrasted with those of existing definitions, particularly Beck’s. The new definition is informally evaluated with respect to a set of scenarios that highlights differences from Beck’s definition as well as those of others in the literature.",
"title": ""
},
{
"docid": "3ed3b4f507c32f6423ca3918fa3eb843",
"text": "In recent years, it has been clearly evidenced that most cells in a human being are not human: they are microbial, represented by more than 1000 microbial species. The vast majority of microbial species give rise to symbiotic host-bacterial interactions that are fundamental for human health. The complex of these microbial communities has been defined as microbiota or microbiome. These bacterial communities, forged over millennia of co-evolution with humans, are at the basis of a partnership with the developing human newborn, which is based on reciprocal molecular exchanges and cross-talking. Recent data on the role of the human microbiota in newborns and children clearly indicate that microbes have a potential importance to pediatrics, contributing to host nutrition, developmental regulation of intestinal angiogenesis, protection from pathogens, and development of the immune system. This review is aimed at reporting the most recent data on the knowledge of microbiota origin and development in the human newborn, and on the multiple factors influencing development and maturation of our microbiota, including the use and abuse of antibiotic therapies.",
"title": ""
},
{
"docid": "9ab9db92758aaeb7618a3a5428b9cb1b",
"text": "A goal of our research is to produce a light-weight, low-cost five fingered robotic hand that has similar degrees of freedom as a human hand. The joints in the fingers of the developed robotic hand are powered by a newly proposed strings transmission named “Twist Drive”. The transmission converts torque into a pulling force by using a pair of strings that twist on each other. The basic characteristics of the transmission are given in the paper. A robotic hand prototype with 18 joints of which 14 are independently powered by Twist Drives was produced. The size of the hand is equal to the size of an adult human's hand and its weight including the power circuits is approximately 800 grams. The mechanical and the control systems of the hand are presented in the paper.",
"title": ""
},
{
"docid": "e415deac22afd9221995385e681b7f63",
"text": "AIM & OBJECTIVES\nThe purpose of this in vitro study was to evaluate and compare the microleakage of pit and fissure sealants after using six different preparation techniques: (a) brush, (b) pumice slurry application, (c) bur, (d) air polishing, (e) air abrasion, and (f) longer etching time.\n\n\nMATERIAL & METHOD\nThe study was conducted on 60 caries-free first premolars extracted for orthodontic purpose. These teeth were randomly assigned to six groups of 10 teeth each. Teeth were prepared using one of six occlusal surface treatments prior to placement of Clinpro\" 3M ESPE light-cured sealant. The teeth were thermocycled for 500 cycles and stored in 0.9% normal saline. Teeth were sealed apically and coated with nail varnish 1 mm from the margin and stained in 1% methylene blue for 24 hours. Each tooth was divided buccolingually parallel to the long axis of the tooth, yielding two sections per tooth for analysis. The surfaces were scored from 0 to 2 for the extent of microleakage.\n\n\nSTATISTICAL ANALYSIS\nResults obtained for microleakage were analyzed by using t-tests at sectional level and chi-square test and analysis of variance (ANOVA) at the group level.\n\n\nRESULTS\nThe results of round bur group were significantly superior when compared to all other groups. The application of air polishing and air abrasion showed better results than pumice slurry, bristle brush, and longer etching time. Round bur group was the most successful cleaning and preparing technique. Air polishing and air abrasion produced significantly less microleakage than traditional pumice slurry, bristle brush, and longer etching time.",
"title": ""
},
{
"docid": "58331d0d42452d615b5a20da473ef5e2",
"text": "This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of “history of word” to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the “history of word” concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.",
"title": ""
},
{
"docid": "9bf951269881138b9fae1d345be5b2e8",
"text": "A biofuel from any biodegradable formation process such as a food waste bio-digester plant is a mixture of several gases such as methane (CH4), carbon dioxide (CO2), hydrogen sulfide (H2S), ammonia (NH3) and impurities like water and dust particles. The results are reported of a parametric study of the process of separation of methane, which is the most important gas in the mixture and usable as a biofuel, from particles and H2S. A cyclone, which is a conventional, economic and simple device for gas-solid separation, is considered based on the modification of three Texas A&M cyclone designs (1D2D, 2D2D and 1D3D) by the inclusion of an air inlet tube. A parametric sizing is performed of the cyclone for biogas purification, accounting for the separation of hydrogen sulfide (H2S) and dust particles from the biofuel. The stochiometric oxidation of H2S to form elemental sulphur is considered a useful cyclone design criterion. The proposed design includes geometric parameters and several criteria for quantifying the performance of cyclone separators such as the Lapple Model for minimum particle diameter collected, collection efficiency and pressure drop. For biogas volumetric flow rates between 0 and 1 m/s and inlet flow velocities of 12 m/s, 15 m/s and 18 m/s for the 1D2D, 2D2D and 1D3D cyclones, respectively, it is observed that the 2D2D configuration is most economic in terms of sizing (total height and diameter of cyclone). The 1D2D configuration experiences the lowest pressure drop. A design algorithm coupled with a user-friendly graphics interface is developed on the MATLAB platform, providing a tool for sizing and designing suitable cyclones.",
"title": ""
},
{
"docid": "34cc70a2acf5680442f0511c50215d25",
"text": "Machine Learning has traditionally focused on narrow artificial intelligence solutions for specific problems. Despite this, we observe two trends in the state-of-the-art: One, increasing architectural homogeneity in algorithms and models. Two, algorithms having more general application: New techniques often beat many benchmarks simultaneously. We review the changes responsible for these trends and look to computational neuroscience literature to anticipate future progress.",
"title": ""
},
{
"docid": "183dd878b9762be9a6356b49e6f8db69",
"text": "Resting-state functional magnetic resonance imaging (fMRI) has attracted more and more attention because of its effectiveness, simplicity and non-invasiveness in exploration of the intrinsic functional architecture of the human brain. However, user-friendly toolbox for \"pipeline\" data analysis of resting-state fMRI is still lacking. Based on some functions in Statistical Parametric Mapping (SPM) and Resting-State fMRI Data Analysis Toolkit (REST), we have developed a MATLAB toolbox called Data Processing Assistant for Resting-State fMRI (DPARSF) for \"pipeline\" data analysis of resting-state fMRI. After the user arranges the Digital Imaging and Communications in Medicine (DICOM) files and click a few buttons to set parameters, DPARSF will then give all the preprocessed (slice timing, realign, normalize, smooth) data and results for functional connectivity, regional homogeneity, amplitude of low-frequency fluctuation (ALFF), and fractional ALFF. DPARSF can also create a report for excluding subjects with excessive head motion and generate a set of pictures for easily checking the effect of normalization. In addition, users can also use DPARSF to extract time courses from regions of interest.",
"title": ""
},
{
"docid": "35146179b441c6f34bf3baf1c895136e",
"text": "Search and browsing activity is known to be a valuable source of information about user's search intent. It is extensively utilized by most of modern search engines to improve ranking by constructing certain ranking features as well as by personalizing search. Personalization aims at two major goals: extraction of stable preferences of a user and specification and disambiguation of the current query. The common way to approach these problems is to extract information from user's search and browsing long-term history and to utilize short-term history to determine the context of a given query. Personalization of the web search for the first queries in new search sessions of new users is more difficult due to the lack of both long- and short-term data.\n In this paper we study the problem of short-term personalization. To be more precise, we restrict our attention to the set of initial queries of search sessions. These, with the lack of contextual information, are known to be the most challenging for short-term personalization and are not covered by previous studies on the subject. To approach this problem in the absence of the search context, we employ short-term browsing context. We apply a widespread framework for personalization of search results based on the re-ranking approach and evaluate our methods on the large scale data. The proposed methods are shown to significantly improve non-personalized ranking of one of the major commercial search engines. To the best of our knowledge this is the first study addressing the problem of short-term personalization based on recent browsing history. We find that performance of this re-ranking approach can be reasonably predicted given a query. When we restrict the use of our method to the queries with largest expected gain, the resulting benefit of personalization increases significantly",
"title": ""
},
{
"docid": "fbe8379aa9af67d746df0c2335f3675a",
"text": "The large volume of data produced by the increasingly deployed Internet of Things (IoT), is shifting security priorities to consider data access control from a data-centric perspective. To secure the IoT, it becomes essential to implement a data access control solution that offers the necessary flexibility required to manage a large number of IoT devices. The concept of Ciphertext-Policy Attribute-based Encryption (CP-ABE) fulfills such requirement. It allows the data source to encrypt data while cryptographically enforcing a security access policy, whereby only authorized data users with the desired attributes are able to decrypt data. Yet, despite these manifest advantages; CP-ABE has not been designed taking into consideration energy efficiency. Many IoT devices, like sensors and actuators, cannot be part of CP-ABE enforcement points, because of their resource limitations in terms of CPU, memory, battery, etc. In this paper, we propose to extend the basic CP-ABE scheme using effective pre-computation techniques. We will experimentally compute the energy saving potential offered by the proposed variant of CP-ABE, and thus demonstrate the applicability of CP-ABE in the IoT.",
"title": ""
},
{
"docid": "c6f52d8333406bce50d72779f07d5ac2",
"text": "Dimensionality reduction studies methods that effectively reduce data dimensionality for efficient data processing tasks such as pattern recognition, machine learning, text retrieval, and data mining. We introduce the field of dimensionality reduction by dividing it into two parts: feature extraction and feature selection. Feature extraction creates new features resulting from the combination of the original features; and feature selection produces a subset of the original features. Both attempt to reduce the dimensionality of a dataset in order to facilitate efficient data processing tasks. We introduce key concepts of feature extraction and feature selection, describe some basic methods, and illustrate their applications with some practical cases. Extensive research into dimensionality reduction is being carried out for the past many decades. Even today its demand is further increasing due to important high-dimensional applications such as gene expression data, text categorization, and document indexing.",
"title": ""
},
{
"docid": "824767fbfc389f5a2da52aa179a325d2",
"text": "We present a real-time algorithm to estimate the 3D pose of a previously unseen face from a single range image. Based on a novel shape signature to identify noses in range images, we generate candidates for their positions, and then generate and evaluate many pose hypotheses in parallel using modern graphics processing units (GPUs). We developed a novel error function that compares the input range image to precomputed pose images of an average face model. The algorithm is robust to large pose variations of plusmn90deg yaw, plusmn45deg pitch and plusmn30deg roll rotation, facial expression, partial occlusion, and works for multiple faces in the field of view. It correctly estimates 97.8% of the poses within yaw and pitch error of 15deg at 55.8 fps. To evaluate the algorithm, we built a database of range images with large pose variations and developed a method for automatic ground truth annotation.",
"title": ""
},
{
"docid": "845348dda35036869b1ecc12658d5603",
"text": "Recent studies on human motor control have been largely innuenced by two important statements: (1) Sensory feedback is too slow to be involved at least in fast motor control actions; (2) Learned internal model of the systems plays an important role in motor control. As a result , the human motor control system is often described as open-loop and particularly as a system inverse. System inverse control is limited by too many problems to be a plausible candidate. Instead, an alternative between open-loop and feedback control is proposed here: the \"open-loop intermittent feedback optimal control\". In this scheme, a prediction of the future behaviour of the system, that requires feedback information and a system model, is used to determine a sequence of actions which is run open-loop. The prediction of a new control sequence is performed intermittently (due to computational demand and slow sensory feedback) but with a suucient frequency to ensure small control errors. The inverted pendulum on a cart is used to illustrate the viability of this scheme.",
"title": ""
},
{
"docid": "6a9188705f76848ab99f3f9d504228a7",
"text": "We present a novel method for classifying 3D objects that is particularly tailored for the requirements in robotic applications. The major challenges here are the comparably small amount of available training data and the fact that often data is perceived in streams and not in fixed-size pools. Traditional state-of-the-art learning methods, however, require a large amount of training data, and their online learning capabilities are usually limited. Therefore, we propose a modality-specific selection of convolutional neural networks (CNN), pre-trained or fine-tuned, in combination with a classifier that is designed particularly for online learning from data streams, namely the Mondrian Forest (MF). We show that this combination of trained features obtained from a CNN can be improved further if a feature selection algorithm is applied. In our experiments, we use the resulting features both with a MF and a linear Support Vector Machine (SVM). With SVM we beat the state of the art on an RGB-D dataset, while with MF a strong result for active learning is achieved.",
"title": ""
},
{
"docid": "884281b32a82a1d1f9811acc73257387",
"text": "The low power wide area network (LPWAN) technologies, which is now embracing a booming era with the development in the Internet of Things (IoT), may offer a brand new solution for current smart grid communications due to their excellent features of low power, long range, and high capacity. The mission-critical smart grid communications require secure and reliable connections between the utilities and the devices with high quality of service (QoS). This is difficult to achieve for unlicensed LPWAN technologies due to the crowded license-free band. Narrowband IoT (NB-IoT), as a licensed LPWAN technology, is developed based on the existing long-term evolution specifications and facilities. Thus, it is able to provide cellular-level QoS, and henceforth can be viewed as a promising candidate for smart grid communications. In this paper, we introduce NB-IoT to the smart grid and compare it with the existing representative communication technologies in the context of smart grid communications in terms of data rate, latency, range, etc. The overall requirements of communications in the smart grid from both quantitative and qualitative perspectives are comprehensively investigated and each of them is carefully examined for NB-IoT. We further explore the representative applications in the smart grid and analyze the corresponding feasibility of NB-IoT. Moreover, the performance of NB-IoT in typical scenarios of the smart grid communication environments, such as urban and rural areas, is carefully evaluated via Monte Carlo simulations.",
"title": ""
}
] |
scidocsrr
|
8cd377e53c9871b880451d84fb5626e2
|
IcoRating: A Deep-Learning System for Scam ICO Identification
|
[
{
"docid": "d310779b1006f90719a0ece3cf2583b2",
"text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.",
"title": ""
}
] |
[
{
"docid": "c93446256aa423b1b350e04bfcf2c844",
"text": "Addressing diversity and apparent contradictions in manifestations of happiness, this article delineates subjective well-being (SWB) as a dynamic system in the face of possible adversity. SWB constitutes a favorable psychological environment that regulates the hostile-world scenario, defined as one’s image of actual or potential threats to one’s life or integrity. SWB operates in various modules: experiential, wherein private awareness of SWB dwells on relevant core themes; declarative, wherein public selfreports of SWB function as social behavior; differential, wherein synchronic dimensions of SWB form well-being types; and narrative, wherein diachronic valences of SWB construct trajectories along one’s life story. By explicating the regulatory and configurational nature of SWB, the present conceptualization emphasizes the process, rather than the outcome, of pursuing happiness.",
"title": ""
},
{
"docid": "4ea7482524661175e8268c15eb22a6ae",
"text": "We present a fully unsupervised, extractive text summarization system that leverages a submodularity framework introduced by past research. The framework allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees. Our main contribution is the novel coverage reward term of the objective function optimized by the greedy algorithm. This component builds on the graph-of-words representation of text and the k-core decomposition algorithm to assign meaningful scores to words. We evaluate our approach on the AMI and ICSI meeting speech corpora, and on the DUC2001 news corpus. We reach state-of-the-art performance on all datasets. Results indicate that our method is particularly well-suited to the meeting domain.",
"title": ""
},
{
"docid": "c167db403413e60c2ed15e728bca81b4",
"text": "OBJECTIVES\nAttachment style refers to a systematic pattern of emotions, behaviors, and expectations that people have for how others will respond in relationships. Extensive evidence has documented the importance of attachment security in infants, children, adolescents, and adults, but the effects of attachment among exclusively older adult populations have received less attention. The present study explored the relationships between attachment style in late adulthood and eudaimonic well-being, which refers to a life replete with meaning, productive activity, and striving to reach one's potential. It also explored the mediating role of self-compassion, which can be described as a kind and forgiving attitude toward the self.\n\n\nMETHOD\nA sample of 126 community-dwelling older adults (mean age = 70.40 years) completed measures tapping adult attachment, self-compassion, and six theoretically derived markers of eudaimonic well-being.\n\n\nRESULTS\nAttachment anxiety and avoidance were inversely related to self-acceptance, personal growth, interpersonal relationship quality, purpose in life, and environmental mastery. Mediation analyses showed that self-compassion mediated each of these relationships.\n\n\nCONCLUSION\nResults support the importance of attachment orientation for psychological well-being in late life and indicate that secure attachment facilitates an attitude of kindness and acceptance toward the self.",
"title": ""
},
{
"docid": "7b046793afebf1c722ce3a62da11c534",
"text": "Location Based Social Networks (LBSNs) are new Web 2.0 systems that are attracting new users in exponential rates. LBSNs like Foursquare and Yelp allow users to share their geographic location with friends through smartphones equipped with GPS, search for interesting places as well as posting tips about existing locations. By allowing users to comment on locations, LBSNs increasingly have to deal with new forms of spammers, which aim at advertising unsolicited messages on tips about locations. Spammers may jeopardize the trust of users on the system, thus, compromising its success in promoting location-based social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we investigated the task of identifying different types of tip spam on a popular Brazilian LBSN system, namely Apontador. Based on a labeled collection of tips provided by Apontador as well as crawled information about users and locations, we identified three types of irregular tips, namely Local Marketing, Pollution and, Bad-mouthing. We leveraged our characterization study towards a classification approach able to differentiate these tips with high accuracy.",
"title": ""
},
{
"docid": "8e31b1f0ed3055332136d8161149e9ed",
"text": "Data collection has become easy due to the rapid development of both mobile devices and wireless networks. In each second, numerous data are generated by user devices and collected through wireless networks. These data, carrying user and network related information, are invaluable for network management. However, they were seldom employed to improve network performance in existing research work. In this article we propose a bandwidth allocation algorithm to increase the throughput of cellular network users by exploring user and network data collected from user devices. With the aid of these data, users can be categorized into clusters and share bandwidth to improve the resource utilization of the network. Simulation results indicate that the proposed scheme is able to rationally form clusters among mobile users and thus significantly increase the throughput and bandwidth efficiency of the network.",
"title": ""
},
{
"docid": "924ad8ede64cf872d979098f41214528",
"text": "BACKGROUND\nSurveys are popular methods to measure public perceptions in emergencies but can be costly and time consuming. We suggest and evaluate a complementary \"infoveillance\" approach using Twitter during the 2009 H1N1 pandemic. Our study aimed to: 1) monitor the use of the terms \"H1N1\" versus \"swine flu\" over time; 2) conduct a content analysis of \"tweets\"; and 3) validate Twitter as a real-time content, sentiment, and public attention trend-tracking tool.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nBetween May 1 and December 31, 2009, we archived over 2 million Twitter posts containing keywords \"swine flu,\" \"swineflu,\" and/or \"H1N1.\" using Infovigil, an infoveillance system. Tweets using \"H1N1\" increased from 8.8% to 40.5% (R(2) = .788; p<.001), indicating a gradual adoption of World Health Organization-recommended terminology. 5,395 tweets were randomly selected from 9 days, 4 weeks apart and coded using a tri-axial coding scheme. To track tweet content and to test the feasibility of automated coding, we created database queries for keywords and correlated these results with manual coding. Content analysis indicated resource-related posts were most commonly shared (52.6%). 4.5% of cases were identified as misinformation. News websites were the most popular sources (23.2%), while government and health agencies were linked only 1.5% of the time. 7/10 automated queries correlated with manual coding. Several Twitter activity peaks coincided with major news stories. Our results correlated well with H1N1 incidence data.\n\n\nCONCLUSIONS\nThis study illustrates the potential of using social media to conduct \"infodemiology\" studies for public health. 2009 H1N1-related tweets were primarily used to disseminate information from credible sources, but were also a source of opinions and experiences. Tweets can be used for real-time content analysis and knowledge translation research, allowing health authorities to respond to public concerns.",
"title": ""
},
{
"docid": "0ec337f7af66ede2a97ade80ce27c131",
"text": "The processing time required by a cryptographic primitive implemented in hardware is an important metric for its performance but it has not received much attention in recent publications on lightweight cryptography. Nevertheless, there are important applications for cost effective low-latency encryption. As the first step in the field, this paper explores the lowlatency behavior of hardware implementations of a set of block ciphers. The latency of the implementations is investigated as well as the trade-offs with other metrics such as circuit area, time-area product, power, and energy consumption. The obtained results are related back to the properties of the underlying cipher algorithm and, as it turns out, the number of rounds, their complexity, and the similarity of encryption and decryption procedures have a strong impact on the results. We provide a qualitative description and conclude with a set of recommendations for aspiring low-latency block cipher designers.",
"title": ""
},
{
"docid": "61b89a2be8b2acc34342dfcc0249f4d5",
"text": "Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e. when there are only a few classes and training examples available in the target task), meta-learning approaches that optimize for future task learning have outperformed the typical transfer approach of initializing model weights from a pre-trained starting point. But as we experimentally show, meta-learning algorithms that work well in the few-class setting do not generalize well in many-shot and many-class cases. In this paper, we propose a joint training approach that combines both transfer-learning and meta-learning. Benefiting from the advantages of each, our method obtains improved generalization performance on unseen target tasks in both fewand many-class and fewand manyshot scenarios.",
"title": ""
},
{
"docid": "16c04f6bc853cf2f8a02e01241aea209",
"text": "To build a healthy online environment, adult image recognition is a crucial and challenging task. Recent deep learning based methods have brought great advances to this task. However, the recognition accuracy and generalization ability need to be further improved. In this paper, a local-context aware network is proposed to improve the recognition accuracy and a corresponding curriculum learning strategy is proposed to guarantee a good generalization ability. The main idea is to integrate the global classification and the local sensitive region detection into one network and optimize them simulatenously. Such strategy helps the classification networks focus more on suspicious regions and thus provide better recognition performance. Two datasets containing over 150,000 images have been collected to evaluate the performance of the proposed approach. From the experiment results, it is observed that our approach can always achieve the best classification accuracy compared with several state-of-the-art approaches investigated.",
"title": ""
},
{
"docid": "a2c93e5497ab4e0317b9e86db6d31dbb",
"text": "Digital photographs are often used in treatment monitoring for home care of less advanced pressure ulcers. We investigated assessment agreement when stage III and IV pressure ulcers in individuals with spinal cord injury were evaluated in person and with the use of digital photographs. Two wound-care nurses assessed 31 wounds among 15 participants. One nurse assessed all wounds in person, while the other used digital photographs. Twenty-four wound description categories were applied in the nurses' assessments. Kappa statistics were calculated to investigate agreement beyond chance (p < or = 0.05). For 10 randomly selected \"double-rated wounds,\" both nurses applied both assessment methods. Fewer categories were evaluated for the double-rated wounds, because some categories were chosen infrequently and agreement could not be measured. Interrater agreement with the two methods was observed for 12 of the 24 categories (50.0%). However, of the 12 categories with agreement beyond chance, agreement was only \"slight\" (kappa = 0-0.20) or \"fair\" (kappa = 0.21-0.40) for 6 categories. The highest agreement was found for the presence of undermining (kappa = 0.853, p < 0.001). Interrater agreement was similar to intramethod agreement (41.2% of the categories demonstrated agreement beyond chance) for the nurses' in-person assessment of the double-rated wounds. The moderate agreement observed may be attributed to variation in subjective perception of qualitative wound characteristics.",
"title": ""
},
{
"docid": "6f8559ae0c06383d30ded2b2651beeff",
"text": "Gradient-based meta-learning methods leverage gradient descent to learn the commonalities among various tasks. While previous such methods have been successful in meta-learning tasks, they resort to simple gradient descent during metatesting. Our primary contribution is the MT-net, which enables the meta-learner to learn on each layer’s activation space a subspace that the taskspecific learner performs gradient descent on. Additionally, a task-specific learner of an MT-net performs gradient descent with respect to a metalearned distance metric, which warps the activation space to be more sensitive to task identity. We demonstrate that the dimension of this learned subspace reflects the complexity of the task-specific learner’s adaptation task, and also that our model is less sensitive to the choice of initial learning rates than previous gradient-based meta-learning methods. Our method achieves state-of-the-art or comparable performance on few-shot classification and regression tasks.",
"title": ""
},
{
"docid": "a9d1cdfd844a7347d255838d5eb74b03",
"text": "An economy based on the exchange of capital, assets and services between individuals has grown significantly, spurred by proliferation of internet-based platforms that allow people to share underutilized resources and trade with reasonably low transaction costs. The movement toward this economy of “sharing” translates into market efficiencies that bear new products, reframe established services, have positive environmental effects, and may generate overall economic growth. This emerging paradigm, entitled the collaborative economy, is disruptive to the conventional company-driven economic paradigm as evidenced by the large number of peer-to-peer based services that have captured impressive market shares sectors ranging from transportation and hospitality to banking and risk capital. The panel explores economic, social, and technological implications of the collaborative economy, how digital technologies enable it, and how the massive sociotechnical systems embodied in these new peer platforms may evolve in response to the market and social forces that drive this emerging ecosystem.",
"title": ""
},
{
"docid": "bc110bfad28a199a6fb9ad816e9660ea",
"text": "This paper presents an improvement of a previously proposed pitch determination algorithm (PDA). Particularly aiming at handling alternate cycles in speech signal, the algorithm estimates pitch through spectrum shifting on logarithmic frequency scale and calculating the Subharmonic-to-Harmonic Ratio (SHR). The evaluation results on two databases show that this algorithm performs considerably better than other PDAs compared. Application of SHR to voice quality analysis task is also presented. The implementation and evaluation routines are available from <http://mel.speech.nwu.edu/sunxj/pda.htm>.",
"title": ""
},
{
"docid": "308622daf5f4005045f3d002f5251f8c",
"text": "The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.",
"title": ""
},
{
"docid": "20080faeab434b9bf8e5fbd440564d04",
"text": "Raw datasets collected for fake news detection usually contain some noise such as missing values. In order to improve the performance of machine learning based fake news detection, a novel data preprocessing method is proposed in this paper to process the missing values. Specifically, we have successfully handled the missing values problem by using data imputation for both categorical and numerical features. For categorical features, we imputed missing values with the most frequent value in the columns. For numerical features, the mean value of the column is used to impute numerical missing values. In addition, TF-IDF vectorization is applied in feature extraction to filter out irrelevant features. Experimental results show that Multi-Layer Perceptron (MLP) classifier with the proposed data preprocessing method outperforms baselines and improves the prediction accuracy by more than 15%.",
"title": ""
},
{
"docid": "903dc946b338c178634fcf9f14e1b1eb",
"text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "8be957572c846ddda107d8343094401b",
"text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "35d9cfbb5f0b2623ce83973ae3235c74",
"text": "Text entry has been a bottleneck of nontraditional computing devices. One of the promising methods is the virtual keyboard for touch screens. Correcting previous estimates on virtual keyboard efficiency in the literature, we estimated the potential performance of the existing QWERTY, FITALY, and OPTI designs of virtual keyboards to be in the neighborhood of 28, 36, and 38 words per minute (wpm), respectively. This article presents 2 quantitative design techniques to search for virtual keyboard layouts. The first technique simulated the dynamics of a keyboard with digraph springs between keys, which produced a Hooke keyboard with 41.6 wpm movement efficiency. The second technique used a Metropolis random walk algorithm guided by a “Fitts-digraph energy” objective function that quantifies the movement efficiency of a virtual keyboard. This method produced various Metropolis keyboards with different HUMAN-COMPUTER INTERACTION, 2002, Volume 17, pp. 89–XXX Copyright © 2002, Lawrence Erlbaum Associates, Inc. Shumin Zhai is a human–computer interaction researcher with an interest in inventing and analyzing interaction methods and devices based on human performance insights and experimentation; he is a Research Staff Member in the User Sciences and Experience Research Department of the IBM Almaden Research Center. Michael Hunter is a graduate student of Computer Science at Brigham Young University; he is interested in designing graphical and haptic user interfaces. Barton A. Smith is an experimental scientist with an interest in machines, people, and society; he is manager of the Human Interface Research Group at the IBM Almaden Research Center. shapes and structures with approximately 42.5 wpm movement efficiency, which was 50% higher than QWERTY and 10% higher than OPTI. With a small reduction (41.16 wpm) of movement efficiency, we introduced 2 more design objectives that produced the ATOMIK layout. One was alphabetical tuning that placed the keys with a tendency from A to Z so a novice user could more easily locate the keys. The other was word connectivity enhancement so the most frequent words were easier to find, remember, and type.",
"title": ""
},
{
"docid": "34f8765ca28666cfeb94e324882a71d6",
"text": "We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products’ quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.",
"title": ""
}
] |
scidocsrr
|
9d08e3d46c90422ecdda34d42ff6df8e
|
The two faces of adolescents' success with peers: adolescent popularity, social adaptation, and deviant behavior.
|
[
{
"docid": "80ee585d49685a24a2011a1ddc27bb55",
"text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.",
"title": ""
}
] |
[
{
"docid": "b732824ec9677b639e34de68818aae50",
"text": "Although there is wide agreement that backfilling produces significant benefits in scheduling of parallel jobs, there is no clear consensus on which backfilling strategy is preferable e.g. should conservative backfilling be used or the more aggressive EASY backfilling scheme; should a First-Come First-Served(FCFS) queue-priority policy be used, or some other such as Shortest job First(SF) or eXpansion Factor(XF); In this paper, we use trace-based simulation to address these questions and glean new insights into the characteristics of backfilling strategies for job scheduling. We show that by viewing performance in terms of slowdowns and turnaround times of jobs within various categories based on their width (processor request size), length (job duration) and accuracy of the user’s estimate of run time, some consistent trends may be observed.",
"title": ""
},
{
"docid": "28b2bbcfb8960ff40f2fe456a5b00729",
"text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation",
"title": ""
},
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
},
{
"docid": "6003e69fb8e1cc994fc29b036d2fcdc8",
"text": "This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation algorithms. These potentials enforce label consistency in image regions and can be seen as a generalization of the commonly used pairwise contrast sensitive smoothness potentials. The higher order potential functions used in our framework take the form of the Robust P n model and are more general than the P n Potts model recently proposed by Kohli et al. We prove that the optimal swap and expansion moves for energy functions composed of these potentials can be computed by solving a st-mincut problem. This enables the use of powerful graph cut based move making algorithms for performing inference in the framework. We test our method on the problem of multi-class object segmentation by augmenting the conventional crf used for object segmentation with higher order potentials defined on image regions. Experiments on challenging data sets show that integration of higher order potentials quantitatively and qualitatively improves results leading to much better definition of object boundaries. We believe that this method can be used to yield similar improvements for many other labelling problems.",
"title": ""
},
{
"docid": "28c0afcde94ba0fcf39678cba0b5999a",
"text": "To describe the aponeurotic expansion of the supraspinatus tendon with anatomic correlations and determine its prevalence in a series of patients imaged with MRI. In the first part of this HIPAA-compliant and IRB-approved study, we retrospectively reviewed 150 consecutive MRI studies of the shoulder obtained on a 1.5-T system. The aponeurotic expansion at the level of the bicipital groove was classified as: not visualized (type 0), flat-shaped (type 1), oval-shaped and less than 50 % the size of the adjacent long head of the biceps section (type 2A), or oval-shaped and more than 50 % the size of the adjacent long head of the biceps section (type 2B). In the second part of this study, we examined both shoulders of 25 cadavers with ultrasound. When aponeurotic expansion was seen at US, a dissection was performed to characterize its origin and termination. An aponeurotic expansion of the supraspinatus located anterior and lateral to the long head of the biceps in its groove was clearly demonstrated in 49 % of the shoulders with MRI. According to our classification, its shape was type 1 in 35 %, type 2A in 10 % and type 2B in 4 %. This structure was also identified in 28 of 50 cadaveric shoulders with ultrasound and confirmed at dissection in 10 cadavers (20 shoulders). This structure originated from the most anterior and superficial aspect of the supraspinatus tendon and inserted distally on the pectoralis major tendon. The aponeurotic expansion of the supraspinatus tendon can be identified with MRI or ultrasound in about half of the shoulders. It courses anteriorly and laterally to the long head of the biceps tendon, outside its synovial sheath.",
"title": ""
},
{
"docid": "4e35031e5d0e6698f90bfec7a1e6bfb8",
"text": "Numerous studies have examined the neuronal inputs and outputs of many areas within the mammalian cerebral cortex, but how these areas are organized into neural networks that communicate across the entire cortex is unclear. Over 600 labeled neuronal pathways acquired from tracer injections placed across the entire mouse neocortex enabled us to generate a cortical connectivity atlas. A total of 240 intracortical connections were manually reconstructed within a common neuroanatomic framework, forming a cortico-cortical connectivity map that facilitates comparison of connections from different cortical targets. Connectivity matrices were generated to provide an overview of all intracortical connections and subnetwork clusterings. The connectivity matrices and cortical map revealed that the entire cortex is organized into four somatic sensorimotor, two medial, and two lateral subnetworks that display unique topologies and can interact through select cortical areas. Together, these data provide a resource that can be used to further investigate cortical networks and their corresponding functions.",
"title": ""
},
{
"docid": "12363093cb0441e0817d4c92ab88e7fb",
"text": "Imperforate hymen, a condition in which the hymen has no aperture, usually occurs congenitally, secondary to failure of development of a lumen. A case of a documented simulated \"acquired\" imperforate hymen is presented in this article. The patient, a 5-year-old girl, was the victim of sexual abuse. Initial examination showed tears, scars, and distortion of the hymen, laceration of the perineal body, and loss of normal anal tone. Follow-up evaluations over the next year showed progressive healing. By 7 months after the injury, the hymen was replaced by a thick, opaque scar with no orifice. Patients with an apparent imperforate hymen require a sensitive interview and careful visual inspection of the genital and anal areas to delineate signs of injury. The finding of an apparent imperforate hymen on physical examination does not eliminate the possibility of antecedent vaginal penetration and sexual abuse.",
"title": ""
},
{
"docid": "4929aae1291a93873ab77961e9aa6e60",
"text": "We describe a nonnegative variant of the ”Sparse PCA” problem. The goal is to create a low dimensional representation from a collection of points which on the one hand maximizes the variance of the projected points and on the other uses only parts of the original coordinates, and thereby creating a sparse representation. What distinguishes our problem from other Sparse PCA formulations is that the projection involves only nonnegative weights of the original coordinates — a desired quality in various fields, including economics, bioinformatics and computer vision. Adding nonnegativity contributes to sparseness, where it enforces a partitioning of the original coordinates among the new axes. We describe a simple yet efficient iterative coordinate-descent type of scheme which converges to a local optimum of our optimization criteria, giving good results on large real world datasets.",
"title": ""
},
{
"docid": "ede379625a9b40375d352c5b6f1c100e",
"text": "This study explored the effects of sequential combinations of consumer experiences. Four kinds of sequential combinations of consumer experiences were designed: exposing to escapist virtual experience preceding direct experience (VEescapist → DE), exposing to education virtual experience preceding direct experience (VEeducation → DE), exposing to escapist virtual experience preceding indirect experience (VEescapist → IDE), and exposing to education virtual experience preceding indirect experience (VEeducation → IDE). The results indicated that “VEescapist → IDE” produces the highest product knowledge and brand attitude; “VEescapist → DE” produces the lowest perceived risk. Additionally, the moderating roles of need for touch and product involvement also explored. For the high need for touch, “VEescapist → IDE” produces the highest product knowledge and “VEescapist → DE” produces the lowest perceived risk; for the high product involvement, “VEescapist → IDE” produces the highest product knowledge and brand attitude and “VEescapist → DE” produces the lowest perceived risk.",
"title": ""
},
{
"docid": "118db394bb1000f64154573b2b77b188",
"text": "Question answering requires access to a knowledge base to check facts and reason about information. Knowledge in the form of natural language text is easy to acquire, but difficult for automated reasoning. Highly-structured knowledge bases can facilitate reasoning, but are difficult to acquire. In this paper we explore tables as a semi-structured formalism that provides a balanced compromise to this tradeoff. We first use the structure of tables to guide the construction of a dataset of over 9000 multiple-choice questions with rich alignment annotations, easily and efficiently via crowd-sourcing. We then use this annotated data to train a semistructured feature-driven model for question answering that uses tables as a knowledge base. In benchmark evaluations, we significantly outperform both a strong unstructured retrieval baseline and a highlystructured Markov Logic Network model.",
"title": ""
},
{
"docid": "b3b9532fc2b2cd72962b0f1449fc3ebc",
"text": "This study aimed to analyze: 1) the pattern of repetition velocity decline during a single set to failure against different submaximal loads (50-85% 1RM) in the bench press exercise; and 2) the reliability of the percentage of performed repetitions, with respect to the maximum possible number that can be completed, when different magnitudes of velocity loss have been reached within each set. Twenty-two men performed 8 tests of maximum number of repetitions (MNR) against loads of 50-55-60-65-70-75-80-85% 1RM, in random order, every 6-7 days. Another 28 men performed 2 separate MNR tests against 60% 1RM. A very close relationship was found between the relative loss of velocity in a set and the percentage of performed repetitions. This relationship was very similar for all loads, but particularly for 50-70% 1RM, even though the number of repetitions completed at each load was significantly different. Moreover, the percentage of performed repetitions for a given velocity loss showed a high absolute reliability. Equations to predict the percentage of performed repetitions from relative velocity loss are provided. By monitoring repetition velocity and using these equations, one can estimate, with considerable precision, how many repetitions are left in reserve in a bench press exercise set.",
"title": ""
},
{
"docid": "035fbb25ed4a97ceb6f92b464b617dfa",
"text": "The microblogging service Twitter is one of the world's most popular online social networks and assembles a huge amount of data produced by interactions between users. A careful analysis of this data allows identifying groups of users who share similar traits, opinions, and preferences. We call community detection the process of user group identification, which grants valuable insights not available upfront. In order to extract useful knowledge from Twitter data many methodologies have been proposed, which define the attributes to be used in community detection problems by manual and empirical criteria - oftentimes guided by the aimed type of community and what the researcher attaches importance to. However, such approach cannot be generalized because it is well known that the task of finding out an appropriate set of attributes leans on context, domain, and data set. Aiming to the advance of community detection domain, reduce computational cost and improve the quality of related researches, this paper proposes a standard methodology for community detection in Twitter using feature selection methods. Results of the present research directly affect the way community detection methodologies have been applied to Twitter and quality of outcomes produced.",
"title": ""
},
{
"docid": "64753b3c47e52ff6f1760231dc13cd63",
"text": "Theatrical improvisation (impro or improv) is a demanding form of live, collaborative performance. Improv is a humorous and playful artform built on an open-ended narrative structure which simultaneously celebrates effort and failure. It is thus an ideal test bed for the development and deployment of interactive artificial intelligence (AI)-based conversational agents, or artificial improvisors. This case study introduces an improv show experiment featuring human actors and artificial improvisors. We have previously developed a deeplearning-based artificial improvisor, trained on movie subtitles, that can generate plausible, context-based, lines of dialogue suitable for theatre (Mathewson and Mirowski 2017b). In this work, we have employed it to control what a subset of human actors say during an improv performance. We also give human-generated lines to a different subset of performers. All lines are provided to actors with headphones and all performers are wearing headphones. This paper describes a Turing test, or imitation game, taking place in a theatre, with both the audience members and the performers left to guess who is a human and who is a machine. In order to test scientific hypotheses about the perception of humans versus machines we collect anonymous feedback from volunteer performers and audience members. Our results suggest that rehearsal increases proficiency and possibility to control events in the performance. That said, consistency with real world experience is limited by the interface and the mechanisms used to perform the show. We also show that human-generated lines are shorter, more positive, and have less difficult words with more grammar and spelling mistakes than the artificial improvisor generated lines.",
"title": ""
},
{
"docid": "ee638ed1b80332080aa55161656910e2",
"text": "The bone marrow (BM) stromal niche can protect acute lymphoblastic leukemia (ALL) cells against the cytotoxicity of chemotherapeutic agents and is a possible source of relapse. The stromal-derived factor-1 (SDF-1)/CXCR4 axis is a major determinant in the crosstalk between leukemic cells and BM stroma. In this study, we investigated the use of AMD11070, an orally available, small-molecule antagonist of CXCR4, as an ALL-sensitizing agent. This compound effectively blocked stromal-induced migration of human ALL cells in culture and disrupted pre-established adhesion to stroma. To examine how to optimally use this compound in vivo, several combinations with cytotoxic drugs were tested in a stromal co-culture system. The best treatment regimen was then tested in vivo. Mice transplanted with murine Bcr/Abl ALL cells survived significantly longer when treated with a combination of nilotinib and AMD11070. Similarly, immunocompromised mice transplanted with human ALL cells and treated with vincristine and AMD11070 had few circulating leukemic cells, normal spleens and reduced human CD19+ cells in the BM at the termination of the experiment. These results show that combined treatment with AMD11070 may be of significant benefit in eradicating residual leukemia cells at locations where they would otherwise be protected by stroma.",
"title": ""
},
{
"docid": "96e24fabd3567a896e8366abdfaad78e",
"text": "Interior permanent magnet synchronous motor (IPMSM) is usually applied to traction motor in the hybrid electric vehicle (HEV). All motors including IPMSM have different parameters and characteristics with various combinations of the number of poles and slots. The proper combination can improve characteristics of traction system ultimately. This paper deals with analysis of the characteristics of IPMSM for mild type HEV according to the combinations of number of poles and slots. The specific models with 16-pole/18-slot, 16-pole/24-slot and 12-pole/18-slot combinations are introduced. And the advantages and disadvantages of these three models are compared. The characteristics of each model are computed in d-q axis equivalent circuit analysis and finite element analysis. After then, the proper combination of the number of poles and slots for HEV traction motor is presented after comparing these three models.",
"title": ""
},
{
"docid": "6d5fb6a470fe80cbe0dac7c80f8fa9d8",
"text": "BACKGROUND\nThe Sexual Assault Resource Center (SARC) in Perth, Western Australia provides free 24-hour medical, forensic, and counseling services to persons aged over 13 years following sexual assault.\n\n\nOBJECTIVE\nThe aim of this research was to design a data management system that maintains accurate quality information on all sexual assault cases referred to SARC, facilitating audit and peer-reviewed research.\n\n\nMETHODS\nThe work to develop SARC Medical Services Clinical Information System (SARC-MSCIS) took place during 2007-2009 as a collaboration between SARC and Curtin University, Perth, Western Australia. Patient demographics, assault details, including injury documentation, and counseling sessions were identified as core data sections. A user authentication system was set up for data security. Data quality checks were incorporated to ensure high-quality data.\n\n\nRESULTS\nAn SARC-MSCIS was developed containing three core data sections having 427 data elements to capture patient's data. Development of the SARC-MSCIS has resulted in comprehensive capacity to support sexual assault research. Four additional projects are underway to explore both the public health and criminal justice considerations in responding to sexual violence. The data showed that 1,933 sexual assault episodes had occurred among 1881 patients between January 1, 2009 and December 31, 2015. Sexual assault patients knew the assailant as a friend, carer, acquaintance, relative, partner, or ex-partner in 70% of cases, with 16% assailants being a stranger to the patient.\n\n\nCONCLUSION\nThis project has resulted in the development of a high-quality data management system to maintain information for medical and forensic services offered by SARC. This system has also proven to be a reliable resource enabling research in the area of sexual violence.",
"title": ""
},
{
"docid": "505e80ac2fe0ee1a34c60279b90d0ca7",
"text": "In an effective e-learning game, the learner’s enjoyment acts as a catalyst to encourage his/her learning initiative. Therefore, the availability of a scale that effectively measures the enjoyment offered by e-learning games assist the game designer to understanding the strength and flaw of the game efficiently from the learner’s points of view. E-learning games are aimed at the achievement of learning objectives via the creation of a flow effect. Thus, this study is based on Sweetser’s & Wyeth’s framework to develop a more rigorous scale that assesses user enjoyment of e-learning games. The scale developed in the present study consists of eight dimensions: Immersion, social interaction, challenge, goal clarity, feedback, concentration, control, and knowledge improvement. Four learning games employed in a university’s online learning course ‘‘Introduction to Software Application” were used as the instruments of scale verification. Survey questionnaires were distributed to students taking the course and 166 valid samples were subsequently collected. The results showed that the validity and reliability of the scale, EGameFlow, were satisfactory. Thus, the measurement is an effective tool for evaluating the level of enjoyment provided by elearning games to their users. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f7e3d9070792af014b4b9ebaaf047e44",
"text": "Machine Learning algorithms are increasingly being used in recent years due to their flexibility in model fitting and increased predictive performance. However, the complexity of the models makes them hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research in developing various approaches to understand the model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature–engineering property on simulated examples.",
"title": ""
},
{
"docid": "9d73ff3f8528bb412c585d802873fcb4",
"text": "In this work, we introduce a novel interpretation of residual networks showing they are exponential ensembles. This observation is supported by a large-scale lesion study that demonstrates they behave just like ensembles at test time. Subsequently, we perform an analysis showing these ensembles mostly consist of networks that are each relatively shallow. For example, contrary to our expectations, most of the gradient in a residual network with 110 layers comes from an ensemble of very short networks, i.e., only 10-34 layers deep. This suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble. Ultimately, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network – rather, they avoid the problem simply by ensembling many short networks together. This insight reveals that depth is still an open research question and invites the exploration of the related notion of multiplicity.",
"title": ""
}
] |
scidocsrr
|
0b466bd28d8b9f5aca8ba9471430ce34
|
Performance analysis on agriculture ontology using SPARQL query system
|
[
{
"docid": "c9dd964f5421171d4302d1b159c2b415",
"text": "The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors are combined to derive a sensitive and reliable estimate of workload. .",
"title": ""
}
] |
[
{
"docid": "4fc356024295824f6c68360bf2fcb860",
"text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.",
"title": ""
},
{
"docid": "80fc5a0c795deb1ec7a687c7f7b6c863",
"text": "Long non-coding RNAs (lncRNAs) have emerged as critical regulators of genes at epigenetic, transcriptional and post-transcriptional levels, yet what genes are regulated by a specific lncRNA remains to be characterized. To assess the effects of the lncRNA on gene expression, an increasing number of researchers profiled the genome-wide or individual gene expression level change after knocking down or overexpressing the lncRNA. Herein, we describe a curated database named LncRNA2Target, which stores lncRNA-to-target genes and is publicly accessible at http://www.lncrna2target.org. A gene was considered as a target of a lncRNA if it is differentially expressed after the lncRNA knockdown or overexpression. LncRNA2Target provides a web interface through which its users can search for the targets of a particular lncRNA or for the lncRNAs that target a particular gene. Both search types are performed either by browsing a provided catalog of lncRNA names or by inserting lncRNA/target gene IDs/names in a search box.",
"title": ""
},
{
"docid": "30eb03eca06dcc006a28b5e00431d9ed",
"text": "We present for the first time a μW-power convolutional neural network for seizure detection running on a low-power microcontroller. On a dataset of 22 patients a median sensitivity of 100% is achieved. With a false positive rate of 20.7 fp/h and a short detection delay of 3.4 s it is suitable for the application in an implantable closed-loop device.",
"title": ""
},
{
"docid": "0d0b9d20032feb4178a3c98f2787cb8d",
"text": "To address the problem of detecting malicious codes in malware and extracting the corresponding evidences in mobile devices, we construct a consortium blockchain framework, which is composed of a detecting consortium chain shared by test members and a public chain shared by users. Specifically, in view of different malware families in Android-based system, we perform feature modeling by utilizing statistical analysis method, so as to extract malware family features, including software package feature, permission and application feature, and function call feature. Moreover, for reducing false-positive rate and improving the detecting ability of malware variants, we design a multi-feature detection method of Android-based system for detecting and classifying malware. In addition, we establish a fact-base of distributed Android malicious codes by blockchain technology. The experimental results show that, compared with the previously published algorithms, the new proposed method can achieve higher detection accuracy in limited time with lower false-positive and false-negative rates.",
"title": ""
},
{
"docid": "653fee86af651e13e0d26fed35ef83e4",
"text": "Small ducted fan autonomous vehicles have potential for several applications, especially for missions in urban environments. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller for the GTSpy, a small ducted fan autonomous vehicle based on the Micro Autonomous Systems’ Helispy. This approach allows utilization of the entire low speed flight envelope with a relatively poorly understood vehicle. A simulator model is constructed from a force and moment analysis of the vehicle, allowing for a validation of the controller in preparation for flight testing. Data from flight testing of the system is provided.",
"title": ""
},
{
"docid": "c2bd13531b20a55fe5e52191160c5bdd",
"text": "The dual-tree complex wavelet transform ( CWT) is a relatively recent enhancement of the discrete wavelet transform (DWT) with important additional properties: It is nearly shift-invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor of only2 for d-dimensional signals, which is substantially lower than the undecimated DWT. The multidimensional dual-tree CWT is non-separable but is based on a computationally efficient, separable filter bank. This tutorial discusses the theory behind the dual-tree transform, shows how complex wavelets with good properties can be designed, and illustrates a range of applications in signal and image processing.",
"title": ""
},
{
"docid": "f91a507a9cb7bdee2e8c3c86924ced8d",
"text": "a r t i c l e i n f o It is often stated that bullying is a \" group process \" , and many researchers and policymakers share the belief that interventions against bullying should be targeted at the peer-group level rather than at individual bullies and victims. There is less insight into what in the group level should be changed and how, as the group processes taking place at the level of the peer clusters or school classes have not been much elaborated. This paper reviews the literature on the group involvement in bullying, thus providing insight into the individuals' motives for participation in bullying, the persistence of bullying, and the adjustment of victims across different peer contexts. Interventions targeting the peer group are briefly discussed and future directions for research on peer processes in bullying are suggested. Bullying is a subtype of aggressive behavior, in which an individual or a group of individuals repeatedly attacks, humiliates, and/or excludes a relatively powerless person. The majority of studies on the topic have been conducted in schools, focusing on bullying among the concept of bullying is used to refer to peer-to-peer bullying among school-aged children and youth, when not otherwise mentioned. It is known that a sizable minority of primary and secondary school students is involved in peer-to-peer bullying either as perpetrators or victims — or as both, being both bullied themselves and harassing others. In WHO's Health Behavior in School-Aged Children survey (HBSC, see Craig & Harel, 2004), the average prevalence of victims across the 35 countries involved was 11%, whereas bullies represented another 11%. Children who report both bullying others and being bullied by others (so-called bully–victims) were not identified in the HBSC study, but other studies have shown that approximately 4–6% of the children can be classified as bully–victims (Haynie et al., 2001; Nansel et al., 2001). Bullying constitutes a serious risk for the psychosocial and academic adjustment of both victims",
"title": ""
},
{
"docid": "c42d1ee7a6b947e94eeb6c772e2b638f",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "abef10b620026b2c054ca69a3c75f930",
"text": "The idea that general intelligence may be more variable in males than in females has a long history. In recent years it has been presented as a reason that there is little, if any, mean sex difference in general intelligence, yet males tend to be overrepresented at both the top and bottom ends of its overall, presumably normal, distribution. Clear analysis of the actual distribution of general intelligence based on large and appropriately population-representative samples is rare, however. Using two population-wide surveys of general intelligence in 11-year-olds in Scotland, we showed that there were substantial departures from normality in the distribution, with less variability in the higher range than in the lower. Despite mean IQ-scale scores of 100, modal scores were about 105. Even above modal level, males showed more variability than females. This is consistent with a model of the population distribution of general intelligence as a mixture of two essentially normal distributions, one reflecting normal variation in general intelligence and one refecting normal variation in effects of genetic and environmental conditions involving mental retardation. Though present at the high end of the distribution, sex differences in variability did not appear to account for sex differences in high-level achievement.",
"title": ""
},
{
"docid": "b0697c48ac1698d72f1a7fd5bf1f4c29",
"text": "In the design of compact, power dense electrical machines found in automotive and aerospace applications it may be preferable to accept a degree of eddy current loss within the winding to realise a low cost high integrity winding. Commonly adopted techniques for analysing AC effects in electrical machine windings tend only to be applicable where the conductor dimensions are smaller than the skin depth and for ideally transposed windings. Further the temperature variation of AC losses is often overlooked. In this paper a more general approach is proposed based on an established analytical model and is equally applicable to windings where the conductor size greatly exceeds the skin depth. The model is validated against test measurements and FE analysis on representative single layer winding arrangements. A case study is used to illustrate the application of the proposed analytical method to the coupled electromagnetic and thermal modelling of a typical slot winding.",
"title": ""
},
{
"docid": "ee9f06156ac87fed9bfed9dbf895221a",
"text": "The traffic on the roads is increasing day by day. There is dire need of developing an automation system that can effectively manage and control the traffic on roads. The traffic data of multiple vehicle types on roads is also important for taking various decisions related to traffic. A video based traffic data collection system for multiple vehicle types is helpful for monitoring vehicles under homogenous and heterogeneous traffic conditions. In this paper, we have studied different methods for the identification, classification and counting vehicles from online and offline videos in India as well as other countries. The paper also discusses the various applications of video based automatic traffic control system. The various challenges faced by the researchers for developing such systems are also discussed.",
"title": ""
},
{
"docid": "c33c481d999e3f4d3f8e3a664424f72b",
"text": "Privacy notices are the default mechanism used to inform users about the data collection and use practices of technologies (e.g., websites, mobile apps, Internet of Things devices) and processes with which they interact. The length of these policies and their often convoluted language have been shown to discourage most users from reading them. Recent progress in natural language processing and machine learning has opened the door to the development of technologies that are capable of automatically extracting statements (or “annotations”) from the text of privacy policies. These technologies could help users quickly identify those elements of a privacy notice they care about without requiring them to read the full text of the notice. In this article, we review the requirements associated with the development of Query Answering functionality that would enable users to ask questions about specific aspects of privacy notices (e.g. Does this app share my location with third parties? Am I able to review the information this website collects about me? Can I delete my account? For how long is my information going to be retained by this company?). We discuss different possible approaches to supporting such functionality and how they relate to recent advances in automatically annotating privacy notices. Initial results obtained with different machine learning/natural language processing techniques are presented, suggesting that Query Answering functionality could be a particularly promising approach to informing users about privacy practices. In particular, in contrast to automated annotation techniques that aim to extract detailed statements from the text of privacy notices, Query Answering functionality could be configured to return short text fragments extracted from privacy notices and rely on the user (rather than the computer) to interpret some of the finer nuances of the text found in these fragments. Such an approach could potentially prove more robust than fully automated annotation techniques, which at least at this time struggle with the interpretation of finer nuances. This article also includes a brief discussion of opportunities and challenges associated with possible extensions of Query Answering functionality in the form of privacy assistants capable of entertaining dialogues with users to clarify some of their questions and help them understand to what extent their concerns are explicitly addressed (or not) by the text of privacy notices. Such functionality could provide for yet greater robustness and usability than fully automated annotation techniques, and could eventually also leverage models of what the user already knows and/or cares about.",
"title": ""
},
{
"docid": "1349cdd5f181c2d6b958280a728d43b6",
"text": "Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks.",
"title": ""
},
{
"docid": "6ea695012347ca76ccb3139c73a6b7b3",
"text": "Deep learning has improved state-of-the-art results in many important fields, and has been the subject of much research in recent years, leading to the development of several systems for facilitating deep learning. Current systems, however, mainly focus on model building and training phases, while the issues of data management, model sharing, and lifecycle management are largely ignored. Deep learning modeling lifecycle contains a rich set of artifacts, such as learned parameters and training logs, and frequently conducted tasks, e.g., to understand the model behaviors and to try out new models. Dealing with such artifacts and tasks is cumbersome and left to the users. To address these issues in a comprehensive manner, we propose ModelHub, which includes a novel model versioning system (dlv); a domain specific language for searching through model space (DQL); and a hosted service (ModelHub) to store developed models, explore existing models, enumerate new models and share models with others.",
"title": ""
},
{
"docid": "1b0cb70fb25d86443a01a313371a27ae",
"text": "We present a protocol for general state machine replication – a method that provides strong consistency – that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.",
"title": ""
},
{
"docid": "274ce66c0bcc77a1e4a858bef9e41111",
"text": "It is a timely issue to understand the impact of bilingualism upon brain structure in healthy aging and upon cognitive decline given evidence of its neuroprotective effects. Plastic changes induced by bilingualism were reported in young adults in the left inferior parietal lobule (LIPL) and its right counterpart (RIPL) (Mechelli et al., 2004). Moreover, both age of second language (L2) acquisition and L2 proficiency correlated with increased grey matter (GM) in the LIPL/RIPL. However it is unknown whether such findings replicate in older bilinguals. We examined this question in an aging bilingual population from Hong Kong. Results from our Voxel Based Morphometry study show that elderly bilinguals relative to a matched monolingual control group also have increased GM volumes in the inferior parietal lobules underlining the neuroprotective effect of bilingualism. However, unlike younger adults, age of L2 acquisition did not predict GM volumes. Instead, LIPL and RIPL appear differentially sensitive to the effects of L2 proficiency and L2 exposure with LIPL more sensitive to the former and RIPL more sensitive to the latter. Our data also intimate that such * Corresponding author. University Vita-Salute San Raffaele, Via Olgettina 58, 20132 Milan, Italy. Tel.: þ39 0226434888. E-mail addresses: abutalebi.jubin@hsr.it, jubin@hku.hk (J. Abutalebi).",
"title": ""
},
{
"docid": "a3bc8e58c397343e2d381c6f662be6ff",
"text": "Researchers have assumed that low self-esteem predicts deviance, but empirical results have been mixed. This article draws upon recent theoretical developments regarding contingencies of self-worth to clarify the self-esteem/deviance relation. It was predicted that self-esteem level would relate to deviance only when self-esteem was not contingent on workplace performance. In this manner, contingent self-esteem is a boundary condition for self-consistency/behavioral plasticity theory predictions. Using multisource data collected from 123 employees over 6 months, the authors examined the interaction between level (high/low) and type (contingent/noncontingent) of self-esteem in predicting workplace deviance. Results support the hypothesized moderating effects of contingent self-esteem; implications for self-esteem theories are discussed.",
"title": ""
},
{
"docid": "a8a446895fdb6a1ededc47446c908736",
"text": "Citeology is an interactive visualization that looks at the relationships between research publications through their use of citations. The sample corpus uses all 3,502 papers published at ACM CHI and UIST between 1982 and 2010, and the 11,699 citations between them. A connection is drawn between each paper and all papers which it referenced from the collection. For an individual paper, the resulting visualization represents a \"family tree\" of sorts, showing multiple generations of referenced papers which the target paper built upon, and all descendant generations of future papers.",
"title": ""
},
{
"docid": "25bb739c67fed1a4a0573ef1dff4d89e",
"text": "Symbolic execution is a well-known program analysis technique which represents program inputs with symbolic values instead of concrete, initialized, data and executes the program by manipulating program expressions involving the symbolic values. Symbolic execution has been proposed over three decades ago but recently it has found renewed interest in the research community, due in part to the progress in decision procedures, availability of powerful computers and new algorithmic developments. We provide here a survey of some of the new research trends in symbolic execution, with particular emphasis on applications to test generation and program analysis. We first describe an approach that handles complex programming constructs such as input recursive data structures, arrays, as well as multithreading. Furthermore, we describe recent hybrid techniques that combine concrete and symbolic execution to overcome some of the inherent limitations of symbolic execution, such as handling native code or availability of decision procedures for the application domain. We follow with a discussion of techniques that can be used to limit the (possibly infinite) number of symbolic configurations that need to be analyzed for the symbolic execution of looping programs. Finally, we give a short survey of interesting new applications, such as predictive testing, invariant inference, program repair, analysis of parallel numerical programs and differential symbolic execution.",
"title": ""
},
{
"docid": "e648aa29c191885832b4deee5af9b5b5",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] |
scidocsrr
|
fc07b26aebac497666ed0e8d16b261c0
|
Gamification for Engaging Computer Science Students in Learning Activities: A Case Study
|
[
{
"docid": "9b13beaf2e5aecc256117fdd8ccf8368",
"text": "This paper examines the literature on computer games and serious games in regard to the potential positive impacts of gaming on users aged 14 years or above, especially with respect to learning, skill enhancement and engagement. Search terms identified 129 papers reporting empirical evidence about the impacts and outcomes of computer games and serious games with respect to learning and engagement and a multidimensional approach to categorizing games was developed. The findings revealed that playing computer games is linked to a range of perceptual, cognitive, behavioural, affective and motivational impacts and outcomes. The most frequently occurring outcomes and impacts were knowledge acquisition/content understanding and affective and motivational outcomes. The range of indicators and measures used in the included papers are discussed, together with methodological limitations and recommendations for further work in this area. 2012 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
}
] |
[
{
"docid": "8f9d5cd416ac038a4cbdf64737039053",
"text": "This paper proposes a method to extract the feature points from faces automatically. It provides a feasible way to locate the positions of two eyeballs, near and far corners of eyes, midpoint of nostrils and mouth corners from face image. This approach would help to extract useful features on human face automatically and improve the accuracy of face recognition. The experiments show that the method presented in this paper could locate feature points from faces exactly and quickly.",
"title": ""
},
{
"docid": "21031b55206dd330852b8d11e8e6a84a",
"text": "To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other stateof-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online.",
"title": ""
},
{
"docid": "555b07171f5305f7ae968d9a76d74ec3",
"text": "The production of lithium-ion (Li-ion) batteries has been continually increasing since their first introduction into the market in 1991 because of their excellent performance, which is related to their high specific energy, energy density, specific power, efficiency, and long life. Li-ion batteries were first used for consumer electronics products such as mobile phones, camcorders, and laptop computers, followed by automotive applications that emerged during the last decade and are still expanding, and finally industrial applications including energy storage. There are four promising cell chemistries considered for energy storage applications: 1) LiMn2O4/graphite cell chemistry uses low-cost materials that are naturally abundant; 2) LiNi1-X-Y2CoXAlYO2/graphite cell chemistry has high specific energy and long life; 3) LiFePO4/graphite (or carbon) cell chemistry has good safety characteristics; and 4) Li4Ti5O12 is used as the negative electrode material in Li-ion batteries with long life and good safety features. However, each of the cell chemistries has some disadvantages, and the development of these technologies is still in progress. Therefore, it is too early to predict which cell chemistry will be the main candidate for energy storage applications, and we have to remain vigilant with respect to trends in technological progress and also consider changes in economic and social conditions before this can be determined.",
"title": ""
},
{
"docid": "efb305d95cf7197877de0b2fb510f33a",
"text": "Drug-induced cardiotoxicity is emerging as an important issue among cancer survivors. For several decades, this topic was almost exclusively associated with anthracyclines, for which cumulative dose-related cardiac damage was the limiting step in their use. Although a number of efforts have been directed towards prediction of risk, so far no consensus exists on the strategies to prevent and monitor chemotherapy-related cardiotoxicity. Recently, a new dimension of the problem has emerged when drugs targeting the activity of certain tyrosine kinases or tumor receptors were recognized to carry an unwanted effect on the cardiovascular system. Moreover, the higher than expected incidence of cardiac dysfunction occurring in patients treated with a combination of old and new chemotherapeutics (e.g. anthracyclines and trastuzumab) prompted clinicians and researchers to find an effective approach to the problem. From the pharmacological standpoint, putative molecular mechanisms involved in chemotherapy-induced cardiotoxicity will be reviewed. From the clinical standpoint, current strategies to reduce cardiotoxicity will be critically addressed. In this perspective, the precise identification of the antitarget (i.e. the unwanted target causing heart damage) and the development of guidelines to monitor patients undergoing treatment with cardiotoxic agents appear to constitute the basis for the management of drug-induced cardiotoxicity.",
"title": ""
},
{
"docid": "715eaf7bca0a1b65b9fbd0dd05f9684e",
"text": "The recent proliferation of location-based services (LBSs) has necessitated the development of effective indoor positioning solutions. In such a context, wireless local area network (WLAN) positioning is a particularly viable solution in terms of hardware and installation costs due to the ubiquity of WLAN infrastructures. This paper examines three aspects of the problem of indoor WLAN positioning using received signal strength (RSS). First, we show that, due to the variability of RSS features over space, a spatially localized positioning method leads to improved positioning results. Second, we explore the problem of access point (AP) selection for positioning and demonstrate the need for further research in this area. Third, we present a kernelized distance calculation algorithm for comparing RSS observations to RSS training records. Experimental results indicate that the proposed system leads to a 17 percent (0.56 m) improvement over the widely used K-nearest neighbor and histogram-based methods",
"title": ""
},
{
"docid": "094a524941b9ce2e9d9620264fdfe44e",
"text": "Large graphs are getting increasingly popular and even indispensable in many applications, for example, in social media data, large networks, and knowledge bases. Efficient graph analytics thus becomes an important subject of study. To increase efficiency and scalability, in-memory computation and parallelism have been explored extensively to speed up various graph analytical workloads. In many graph analytical engines (e.g., Pregel, Neo4j, GraphLab), parallelism is achieved via one of the three concurrency control models, namely, bulk synchronization processing (BSP), asynchronous processing, and synchronous processing. Among them, synchronous processing has the potential to achieve the best performance due to fine-grained parallelism, while ensuring the correctness and the convergence of the computation, if an effective concurrency control scheme is used. This paper explores the topological properties of the underlying graph to design and implement a highly effective concurrency control scheme for efficient synchronous processing in an in-memory graph analytical engine. Our design uses a novel hybrid approach that combines 2PL (two-phase locking) with OCC (optimistic concurrency control), for high degree and low degree vertices in a graph respectively. Our results show that the proposed hybrid synchronous scheduler has significantly outperformed other synchronous schedulers in existing graph analytical engines, as well as BSP and asynchronous schedulers.",
"title": ""
},
{
"docid": "e0c83197770752c9fdfe5e51edcd3d46",
"text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.",
"title": ""
},
{
"docid": "9a4844cd2cc1d167bcb376bc7a3a0328",
"text": "BACKGROUND\nData on the potential behavioral effects of music therapy in autism are scarce.\n\n\nOBJECTIVE\nThe aim of this study was to investigate whether a musical training program based on interactive music therapy sessions could enhance the behavioral profile and the musical skills of young adults affected by severe autism.\n\n\nMETHODOLOGY\nYoung adults (N = 8) with severe (Childhood Autism Rating Scale >30) autism took part in a total of 52 weekly active music therapy sessions lasting 60 minutes. Each session consisted of a wide range of different musical activities including singing, piano playing, and drumming. Clinical rating scales included the Clinical Global Impression (CGI) scale and the Brief Psychiatric Rating Scale (BPRS). Musical skills-including singing a short or long melody, playing the C scale on a keyboard, music absorption, rhythm reproduction, and execution of complex rhythmic patterns-were rated on a 5-point Likert-type scale ranging from \"completely/entirely absent\" to \"completely/entirely present.\"\n\n\nRESULTS\nAt the end of the 52-week training period, significant improvements were found on both the CGI and BPRS scales. Similarly, the patients' musical skills significantly ameliorated as compared to baseline ratings.\n\n\nCONCLUSIONS\nOur pilot data seem to suggest that active music therapy sessions could be of aid in improving autistic symptoms, as well as personal musical skills in young adults with severe autism.",
"title": ""
},
{
"docid": "71759cdcf18dabecf1d002727eb9d8b8",
"text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.",
"title": ""
},
{
"docid": "90c249885f6974782c54561ff182e28c",
"text": "Article history: Received on: 18/11/2014 Revised on: 09/12/2014 Accepted on: 11/01/2015 Available online: 27/02/2015 In recent years, albumin nanoparticles have been widely studied for delivery of various active pharmaceuticals with enhanced accumulation at the site of inflammation. Albumin is a versatile carrier to prepare nanoparticles and nanospheres due to its easy availability in pure form, biodegradability non-toxic and non-immunogenic nature. The mechanism of action of Serratiopeptidase appears to be hydrolysis of histamine, bradykinin and serotonin. Serratiopeptidase also has a proteolytic and fibrinolytic effect. Protein i.e. bovine serum albumin was used to entrap serratiopeptidase enzyme. Protease activity of the enzyme was checked and method was validated to access the active enzyme concentration during formulation. Solvent desolvation method was used for the preparation of BSA nanoparticles. Effect of buffer pH was checked on the enzyme activity. Chloroform was selected and used as solvent for nanoparticle preparation. Effect of various variables such as concentration of BSA, agitation rate, glutarldehyde concentration, time of crosslinking etc. on the formulation was studied. Formed nanoparticles were characterized for drug content, in-vitro release, entrapment efficiency, particle size and size distribution. The formed serratiopeptidase loaded albumin nanoparticles may be used for the treatment of arthritis.",
"title": ""
},
{
"docid": "ced3a56c5469528e8fa5784dc0fff5d4",
"text": "This paper explores the relation between a set of behavioural information security governance factors and employees’ information security awareness. To enable statistical analysis between proposed relations, data was collected from two different samples in 24 organisations: 24 information security executives and 240 employees. The results reveal that having a formal unit with explicit responsibility for information security, utilizing coordinating committees, and sharing security knowledge through an intranet site significantly correlates with dimensions of employees’ information security awareness. However, regular identification of vulnerabilities in information systems and related processes is significantly negatively correlated with employees’ information security awareness, in particular managing passwords. The effect of behavioural information security governance on employee information security awareness is an understudied topic. Therefore, this study is explorative in nature and the results are preliminary. Nevertheless, the paper provides implications for both research and practice.",
"title": ""
},
{
"docid": "4fcfc5a8273ddbeff85a99189110482e",
"text": "Global information such as event-event association, and latent local information such as fine-grained entity types, are crucial to event classification. However, existing methods typically focus on sophisticated local features such as part-ofspeech tags, either fully or partially ignoring the aforementioned information. By contrast, this paper focuses on fully employing them for event classification. We notice that it is difficult to encode some global information such as eventevent association for previous methods. To resolve this problem, we propose a feasible approach which encodes global information in the form of logic using Probabilistic Soft Logic model. Experimental results show that, our proposed approach advances state-of-the-art methods, and achieves the best F1 score to date on the ACE data set.",
"title": ""
},
{
"docid": "7340823ae6afd072ab186ec8aaad0d44",
"text": "Blood flow measurement using Doppler ultrasound has become a useful tool for diagnosing cardiovascular diseases and as a physiological monitor. Recently, pocket-sized ultrasound scanners have been introduced for portable diagnosis. The present paper reports the implementation of a portable ultrasound pulsed-wave (PW) Doppler flowmeter using a smartphone. A 10-MHz ultrasonic surface transducer was designed for the dynamic monitoring of blood flow velocity. The directional baseband Doppler shift signals were obtained using a portable analog circuit system. After hardware processing, the Doppler signals were fed directly to a smartphone for Doppler spectrogram analysis and display in real time. To the best of our knowledge, this is the first report of the use of this system for medical ultrasound Doppler signal processing. A Couette flow phantom, consisting of two parallel disks with a 2-mm gap, was used to evaluate and calibrate the device. Doppler spectrograms of porcine blood flow were measured using this stand-alone portable device under the pulsatile condition. Subsequently, in vivo portable system verification was performed by measuring the arterial blood flow of a rat and comparing the results with the measurement from a commercial ultrasound duplex scanner. All of the results demonstrated the potential for using a smartphone as a novel embedded system for portable medical ultrasound applications.",
"title": ""
},
{
"docid": "7af729438f32c198d328a1ebc83d2eeb",
"text": "The development of natural language interfaces (NLI's) for databases has been a challenging problem in natural language processing (NLP) since the 1970's. The need for NLI's has become more pronounced due to the widespread access to complex databases now available through the Internet. A challenging problem for empirical NLP is the automated acquisition of NLI's from training examples. We present a method for integrating statistical and relational learning techniques for this task which exploits the strength of both approaches. Experimental results from three different domains suggest that such an approach is more robust than a previous purely logicbased approach. 1 I n t r o d u c t i o n We use the term semantic parsing to refer to the process of mapping a natural language sentence to a structured meaning representation. One interesting application of semantic parsing is building natural language interfaces for online databases. The need for such applications is growing since when information is delivered through the Internet, most users do not know the underlying database access language. An example of such an interface that we have developed is shown in Figure 1. Traditional (rationalist) approaches to constructing database interfaces require an expert to hand-craft an appropriate semantic parser (Woods, 1970; Hendrix et al., 1978). However, such hand-crafted parsers are time consllming to develop and suffer from problems with robustness and incompleteness even for domain specific applications. Nevertheless, very little research in empirical NLP has explored the task of automatically acquiring such interfaces from annotated training examples. The only exceptions of which we are aware axe a statistical approach to mapping airline-information queries into SQL presented in (Miller et al., 1996), a probabilistic decision-tree method for the same task described in (Kuhn and De Mori, 1995), and an approach using relational learning (a.k.a. inductive logic programming, ILP) to learn a logic-based semantic parser described in (Zelle and Mooney, 1996). The existing empirical systems for this task employ either a purely logical or purely statistical approach. The former uses a deterministic parser, which can suffer from some of the same robustness problems as rationalist methods. The latter constructs a probabilistic grammar, which requires supplying a sytactic parse tree as well as a semantic representation for each training sentence, and requires hand-crafting a small set of contextual features on which to condition the parameters of the model. Combining relational and statistical approaches can overcome the need to supply parse-trees and hand-crafted features while retaining the robustness of statistical parsing. The current work is based on the CHILL logic-based parser-acquisition framework (Zelle and Mooney, 1996), retaining access to the complete parse state for making decisions, but building a probabilistic relational model that allows for statistical parsing2 O v e r v i e w o f t h e A p p r o a c h This section reviews our overall approach using an interface developed for a U.S. Geography database (Geoquery) as a sample application (ZeUe and Mooney, 1996) which is available on the Web (see hl:tp://gvg, c s . u t e z a s , edu/users/n~./geo .html). 2.1 S e m a n t i c R e p r e s e n t a t i o n First-order logic is used as a semantic representation language. CHILL has also been applied to a restaurant database in which the logical form resembles SQL, and is translated",
"title": ""
},
{
"docid": "696c5d51a8dc80a84e1b917bf54e1946",
"text": "Compressor Engineering Group with Kobe Steel Ltd., in Hyogo, Japan. He is in charge of both oil-free and oil-flooded screw compressor engineering for process gas and industrial refrigeration. Previously in his 22 year career, Mr. Ohama was an engineer for oilflooded gas screw compressors and managed a screw compressor engineering group. He, along with his staff, developed the highpressure screw compressor “EH series” in 1997, which was the first in the world applied to 60 barg as a series. Mr. Ohama also participated in the Task Force for the preparation of API 619 Third Edition. Mr. Ohama graduated with a B.S. degree (Mechanical Engineering, 1979) and an M.S. degree (Mechanical Engineering, 1981) from the Saga University, Japan.",
"title": ""
},
{
"docid": "8eb99f7441bd77556d6ccf7d6fa22f26",
"text": "Recent developments in both heating and power sectors contribute to the creation of an integrated power system. Taking also into account the increased amount of distributed generation, current trends in power generation, transportation and consumption will be significantly affected. Linking components of this integrated system, such as heat pumps, can be controlled in different ways, to provide certain benefits to different parties. The scope of this paper is to provide a control algorithm for a residential heat pump, in order to minimize its cost, paid by the customer/owner, while maintaining a certain temperature and comfort level. A commercially available heat pump installed in a typical house with standard thermal insulation is considered. Simulation results conclude that the proposed controlling method succeeds in reducing the amount of money spent by the customer for residential heating purposes.",
"title": ""
},
{
"docid": "613ecbd58b0a67af5cbfbb777f511953",
"text": "An increasing number of everyday objects are now connected to the internet, collecting and sharing information about us: the \"Internet of Things\" (IoT). However, as the number of \"social\" objects increases, human concerns arising from this connected world are starting to become apparent. This paper presents the results of a preliminary qualitative study in which five participants lived with an ambiguous IoT device that collected and shared data about their activities at home for a week. In analyzing this data, we identify the nature of human and socio-technical concerns that arise when living with IoT technologies. Trust is identified as a critical factor - as trust in the entity/ies that are able to use their collected information decreases, users are likely to demand greater control over information collection. Addressing these concerns may support greater engagement of users with IoT technology. The paper concludes with a discussion of how IoT systems might be designed to better foster trust with their owners.",
"title": ""
},
{
"docid": "ab1e4a8b0a4d00af488923ea52053aee",
"text": "This paper describes Steve, an animated agent that helps students learn to perform physical, procedural tasks. The student and Steve cohabit a three-dimensional, simulated mock-up of the student's work environment. Steve can demonstrate how to perform tasks and can also monitor students while they practice tasks, providing assistance when needed. This paper describes Steve's architecture in detail, including perception, cognition, and motor control. The perception module monitors the state of the virtual world, maintains a coherent representation of it, and provides this information to the cognition and motor control modules. The cognition module interprets its perceptual input, chooses appropriate goals, constructs and executes plans to achieve those goals, and sends out motor commands. The motor control module implements these motor commands, controlling Steve's voice, locomotion, gaze, and gestures, and allowing Steve to manipulate objects in the virtual world.",
"title": ""
},
{
"docid": "8090f6eff6db1bb92599ecc26698d15f",
"text": "BACKGROUND\nSelf-compassion is a key psychological construct for assessing clinical outcomes in mindfulness-based interventions. The aim of this study was to validate the Spanish versions of the long (26 item) and short (12 item) forms of the Self-Compassion Scale (SCS).\n\n\nMETHODS\nThe translated Spanish versions of both subscales were administered to two independent samples: Sample 1 was comprised of university students (n = 268) who were recruited to validate the long form, and Sample 2 was comprised of Aragon Health Service workers (n = 271) who were recruited to validate the short form. In addition to SCS, the Mindful Attention Awareness Scale (MAAS), the State-Trait Anxiety Inventory-Trait (STAI-T), the Beck Depression Inventory (BDI) and the Perceived Stress Questionnaire (PSQ) were administered. Construct validity, internal consistency, test-retest reliability and convergent validity were tested.\n\n\nRESULTS\nThe Confirmatory Factor Analysis (CFA) of the long and short forms of the SCS confirmed the original six-factor model in both scales, showing goodness of fit. Cronbach's α for the 26 item SCS was 0.87 (95% CI = 0.85-0.90) and ranged between 0.72 and 0.79 for the 6 subscales. Cronbach's α for the 12-item SCS was 0.85 (95% CI = 0.81-0.88) and ranged between 0.71 and 0.77 for the 6 subscales. The long (26-item) form of the SCS showed a test-retest coefficient of 0.92 (95% CI = 0.89-0.94). The Intraclass Correlation (ICC) for the 6 subscales ranged from 0.84 to 0.93. The short (12-item) form of the SCS showed a test-retest coefficient of 0.89 (95% CI: 0.87-0.93). The ICC for the 6 subscales ranged from 0.79 to 0.91. The long and short forms of the SCS exhibited a significant negative correlation with the BDI, the STAI and the PSQ, and a significant positive correlation with the MAAS. The correlation between the total score of the long and short SCS form was r = 0.92.\n\n\nCONCLUSION\nThe Spanish versions of the long (26-item) and short (12-item) forms of the SCS are valid and reliable instruments for the evaluation of self-compassion among the general population. These results substantiate the use of this scale in research and clinical practice.",
"title": ""
},
{
"docid": "3311ef081d181ce715713dacf735d644",
"text": "The advent of multicore processors as the standard computing platform will force major changes in software design.",
"title": ""
}
] |
scidocsrr
|
2aef325c5a3f6f558048232c3f5d8b56
|
Blocks and Fuel: Frameworks for deep learning
|
[
{
"docid": "60f2baba7922543e453a3956eb503c05",
"text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.",
"title": ""
}
] |
[
{
"docid": "2089f931cf6fca595898959cbfbca28a",
"text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.",
"title": ""
},
{
"docid": "de37d1ba8d9c467b5059a02e2eb6ed6a",
"text": "Periodontal disease represents a group of oral inflammatory infections initiated by oral pathogens which exist as a complex biofilms on the tooth surface and cause destruction to tooth supporting tissues. The severity of this disease ranges from mild and reversible inflammation of the gingiva (gingivitis) to chronic destruction of connective tissues, the formation of periodontal pocket and ultimately result in loss of teeth. While human subgingival plaque harbors more than 500 bacterial species, considerable research has shown that Porphyromonas gingivalis, a Gram-negative anaerobic bacterium, is the major etiologic agent which contributes to chronic periodontitis. This black-pigmented bacterium produces a myriad of virulence factors that cause destruction to periodontal tissues either directly or indirectly by modulating the host inflammatory response. Here, this review provides an overview of P. gingivalis and how its virulence factors contribute to the pathogenesis with other microbiome consortium in oral cavity.",
"title": ""
},
{
"docid": "21b6a36a16147a8a7e434feca0df58e6",
"text": "We present Dejavu, a system that uses standard cell-phone sensors to provide accurate and energy-efficient outdoor localization suitable for car navigation. Our analysis shows that different road landmarks have a unique signature on cell-phone sensors; For example, going inside tunnels, moving over bumps, going up a bridge, and even potholes all affect the inertial sensors on the phone in a unique pattern. Dejavu employs a dead-reckoning localization approach and leverages these road landmarks, among other automatically discovered abundant virtual landmarks, to reset the accumulated error and achieve accurate localization. To maintain a low energy profile, Dejavu uses only energy-efficient sensors or sensors that are already running for other purposes.\n We present the design of Dejavu and how it leverages crowd-sourcing to automatically learn virtual landmarks and their locations. Our evaluation results from implementation on different android devices in both city and highway driving show that Dejavu can localize cell phones to within 8.4 m median error in city roads and 16.6 m on highways. Moreover, compared to GPS and other state-of-the-art systems, Dejavu can extend the battery lifetime by 347%, achieving even better localization results than GPS in the more challenging in-city driving conditions.",
"title": ""
},
{
"docid": "180672be0e49be493d9af3ef7b558804",
"text": "Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of ‘‘factors’’ or ‘‘determinants’’ and (2) a manipulation where one (or more) of the ‘‘factors’’ or ‘‘determinants’’ is changed. An effect is realized as a change in the argument of a stable function that produces the same change in the outcome for a class of interventions that change the ‘‘factors’’ by the same amount. The outcomes are compared at different levels of the factors or generating variables. Holding all factors save one at a constant level, the change in the outcome associated with manipulation of the varied factor is called a causal effect of the manipulated factor. This definition, or some version of it, goes back to Mill (1848) and Marshall (1890). Haavelmo’s (1943) made it more precise within the context of linear equations models. The phrase ‘ceteris paribus’ (everything else held constant) is a mainstay of economic analysis",
"title": ""
},
{
"docid": "853edc6c6564920d0d2b69e0e2a63ad0",
"text": "This study evaluates the environmental performance and discounted costs of the incineration and landfilling of municipal solid waste that is ready for the final disposal while accounting for existing waste diversion initiatives, using the life cycle assessment (LCA) methodology. Parameters such as changing waste generation quantities, diversion rates and waste composition were also considered. Two scenarios were assessed in this study on how to treat the waste that remains after diversion. The first scenario is the status quo, where the entire residual waste was landfilled whereas in the second scenario approximately 50% of the residual waste was incinerated while the remainder is landfilled. Electricity was produced in each scenario. Data from the City of Toronto was used to undertake this study. Results showed that the waste diversion initiatives were more effective in reducing the organic portion of the waste, in turn, reducing the net electricity production of the landfill while increasing the net electricity production of the incinerator. Therefore, the scenario that incorporated incineration performed better environmentally and contributed overall to a significant reduction in greenhouse gas emissions because of the displacement of power plant emissions; however, at a noticeably higher cost. Although landfilling proves to be the better financial option, it is for the shorter term. The landfill option would require the need of a replacement landfill much sooner. The financial and environmental effects of this expenditure have yet to be considered.",
"title": ""
},
{
"docid": "30a2235fdbe02f6c39d3ccb30792e2dc",
"text": "Text classification for companies is becoming more important in a world where an increasing amount of digital data are made available. The aim is to research whether five different machine learning algorithms can be used to automate the process of classification of invoice data and see which one gets the highest accuracy. Algorithms are in a later stage combined for an attempt to achieve higher results. N-grams are used, and results are compared in form of total accuracy of classification for each algorithm. A library in Python, called scikit-learn, implementing the chosen algorithms, was used. Data is collected and generated to represent data present on a real invoice where data has been extracted. Results from this thesis show that it is possible to use machine learning for this type of problem. The highest scoring algorithm (LinearSVC from scikit-learn) classifies 86% of all samples correctly. This is a margin of 16% above the acceptable level of 70%.",
"title": ""
},
{
"docid": "ba8e974e77d49749c6b8ad2ce950fb64",
"text": "We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.",
"title": ""
},
{
"docid": "5c5fa8db6eea04b2b0fa6db5c0b9f655",
"text": "Network intrusion detection systems identify malicious connections and thus help protect networks from attacks. Various data-driven approaches have been used in the development of network intrusion detection systems, which usually lead to either very complex systems or poor generalization ability due to the complexity of this challenge. This paper proposes a data-driven network intrusion detection system using fuzzy interpolation in an effort to address the aforementioned limitations. In particular, the developed system equipped with a sparse rule base not only guarantees the online performance of intrusion detection, but also allows the generation of security alerts from situations which are not directly covered by the existing knowledge base. The proposed system has been applied to a well-known data set for system validation and evaluation with competitive results generated.",
"title": ""
},
{
"docid": "fa7ec2419ffc22b1ee43694b5f4e21b9",
"text": "We consider the problem of finding outliers in large multivariate databases. Outlier detection can be applied during the data cleansing process of data mining to identify problems with the data itself, and to fraud detection where groups of outliers are often of particular interest. We use replicator neural networks (RNNs) to provide a measure of the outlyingness of data records. The performance of the RNNs is assessed using a ranked score measure. The effectiveness of the RNNs for outlier detection is demonstrated on two publicly available databases.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "97490d6458ba9870ce22b3418c558c58",
"text": "The brain is expensive, incurring high material and metabolic costs for its size — relative to the size of the body — and many aspects of brain network organization can be mostly explained by a parsimonious drive to minimize these costs. However, brain networks or connectomes also have high topological efficiency, robustness, modularity and a 'rich club' of connector hubs. Many of these and other advantageous topological properties will probably entail a wiring-cost premium. We propose that brain organization is shaped by an economic trade-off between minimizing costs and allowing the emergence of adaptively valuable topological patterns of anatomical or functional connectivity between multiple neuronal populations. This process of negotiating, and re-negotiating, trade-offs between wiring cost and topological value continues over long (decades) and short (millisecond) timescales as brain networks evolve, grow and adapt to changing cognitive demands. An economical analysis of neuropsychiatric disorders highlights the vulnerability of the more costly elements of brain networks to pathological attack or abnormal development.",
"title": ""
},
{
"docid": "4ed28820ed65eb5183bdba6d4a7b1caf",
"text": "Children with red swollen eyes frequently present to emergency departments. Some patients will have orbital cellulitis, a condition that requires immediate diagnosis and treatment. Orbital cellulitis can be confused with the less severe, but more frequently encountered, periorbital cellulitis, which requires less aggressive management. Delayed recognition of the signs and symptoms of orbital cellulitis can lead to serious complications such as blindness, meningitis and cerebral abscess. This article describes the clinical features, epidemiology and outcomes of the condition, and discusses management and treatment. It also includes a case study.",
"title": ""
},
{
"docid": "59beebc51416063d00f6a1ff8032feff",
"text": "In movies, film stars portray another identity or obfuscate their identity with the help of silicone/latex masks. Such realistic masks are now easily available and are used for entertainment purposes. However, their usage in criminal activities to deceive law enforcement and automatic face recognition systems is also plausible. Therefore, it is important to guard biometrics systems against such realistic presentation attacks. This paper introduces the first-of-its-kind silicone mask attack database which contains 130 real and attacked videos to facilitate research in developing presentation attack detection algorithms for this challenging scenario. Along with silicone mask, there are several other presentation attack instruments that are explored in literature. The next contribution of this research is a novel multilevel deep dictionary learning-based presentation attack detection algorithm that can discern different kinds of attacks. An efficient greedy layer by layer training approach is formulated to learn the deep dictionaries followed by SVM to classify an input sample as genuine or attacked. Experimental are performed on the proposed SMAD database, some samples with real world silicone mask attacks, and four existing presentation attack databases, namely, replay-attack, CASIA-FASD, 3DMAD, and UVAD. The results show that the proposed algorithm yields better performance compared with state-of-the-art algorithms, in both intra-database and cross-database experiments.",
"title": ""
},
{
"docid": "3ee772cb68d01c6080459820ee451657",
"text": "We present a non-photorealistic rendering technique to transform color images and videos into painterly abstractions. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features, derived from the smoothed structure tensor. Contrary to conventional edge-preserving filters, our filter generates a painting-like flattening effect along the local feature directions while preserving shape boundaries. As opposed to conventional painting algorithms, it produces temporally coherent video abstraction without extra processing. The GPU implementation of our method processes video in real-time. The results have the clearness of cartoon illustrations but also exhibit directional information as found in oil paintings.",
"title": ""
},
{
"docid": "a8e8bbe19ed505b3e1042783e5e363d6",
"text": "We study the topology of e-mail networks with e-mail addresses as nodes and e-mails as links using data from server log files. The resulting network exhibits a scale-free link distribution and pronounced small-world behavior, as observed in other social networks. These observations imply that the spreading of e-mail viruses is greatly facilitated in real e-mail networks compared to random architectures.",
"title": ""
},
{
"docid": "6df55b88150f5d52aa30ab770f464546",
"text": "OBJECTIVES\nThe objective of this study has been to review the incidence of biological and technical complications in case of tooth-implant-supported fixed partial denture (FPD) treatments on the basis of survival data regarding clinical cases.\n\n\nMATERIAL AND METHODS\nBased on the treatment documentations of a Bundeswehr dental clinic (Cologne-Wahn German Air Force Garrison), the medical charts of 83 patients with tooth-implant-supported FPDs were completely recorded. The median follow-up time was 4.73 (time range: 2.2-8.3) years. In the process, survival curves according to Kaplan and Meier were applied in addition to frequency counts.\n\n\nRESULTS\nA total of 84 tooth-implant (83 patients) connected prostheses were followed (132 abutment teeth, 142 implant abutments (Branemark, Straumann). FPDs: the time-dependent illustration reveals that after 5 years, as many as 10% of the tooth-implant-supported FPDs already had to be subjected to a technical modification (renewal (n=2), reintegration (n=4), veneer fracture (n=5), fracture of frame (n=2)). In contrast to non-rigid connection of teeth and implants, technical modification measures were rarely required in case of tooth-implant-supported FPDs with a rigid connection. There was no statistical difference between technical complications and the used implant system. Abutment teeth and implants: during the observation period, none of the functionally loaded implants (n=142) had to be removed. Three of the overall 132 abutment teeth were lost because of periodontal inflammation. The time-dependent illustration reveals, that after 5 years as many as 8% of the abutment teeth already required corresponding therapeutic measures (periodontal treatment (5%), filling therapy (2.5%), endodontic treatment (0.5%)). After as few as 3 years, the connection related complications of implant abutments (abutment or occlusal screw loosening, loss of cementation) already had to be corrected in approximately 8% of the cases. In the utilization period there was no screw or abutment fracture.\n\n\nCONCLUSION\nTechnical complications of implant-supported FPDs are dependent on the different bridge configurations. When using rigid functional connections, similarly favourable values will be achieved as in case of solely implant-supported FPDs. In this study other characteristics like different fixation systems (screwed vs. cemented) or various implant systems had no significant effect to the rate of technical complications.",
"title": ""
},
{
"docid": "e9fa76fba0256cb99abf7992323a674b",
"text": "Identity formation in adolescence is closely linked to searching for and acquiring meaning in one's life. To date little is known about the manner in which these 2 constructs may be related in this developmental stage. In order to shed more light on their longitudinal links, we conducted a 3-wave longitudinal study, investigating how identity processes and meaning in life dimensions are interconnected across time, testing the moderating effects of gender and age. Participants were 1,062 adolescents (59.4% female), who filled in measures of identity and meaning in life at 3 measurement waves during 1 school year. Cross-lagged models highlighted positive reciprocal associations between (a) commitment processes and presence of meaning and (b) exploration processes and search for meaning. These results were not moderated by adolescents' gender or age. Strong identification with present commitments and reduced ruminative exploration helped adolescents in having a clear sense of meaning in their lives. We also highlighted the dual nature of search for meaning. This dimension was sustained by exploration in breadth and ruminative exploration, and it positively predicted all exploration processes. We clarified the potential for a strong sense of meaning to support identity commitments and that the process of seeking life meaning sustains identity exploration across time. (PsycINFO Database Record",
"title": ""
},
{
"docid": "951d3f81129ecafa2d271d4398d9b3e6",
"text": "The content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for the collar-design feature extraction. A prototype of clothing image retrieval system based on relevance feedback approach and optimum-path forest algorithm is also developed to improve the query results and allows users to find clothing image of more preferred design. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.",
"title": ""
},
{
"docid": "4663b254bc9c93d19ca1accb2c34ac5c",
"text": "Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. Fog nodes in fog computing decide to either process the services using its available resource or send to the cloud server. Thus, fog computing helps to achieve efficient resource utilization and higher performance regarding the delay, bandwidth, and energy consumption. This survey starts by providing an overview and fundamental of fog computing architecture. Furthermore, service and resource allocation approaches are summarized to address several critical issues such as latency, and bandwidth, and energy consumption in fog computing. Afterward, compared to other surveys, this paper provides an extensive overview of state-of-the-art network applications and major research aspects to design these networks. In addition, this paper highlights ongoing research effort, open challenges, and research trends in fog computing.",
"title": ""
},
{
"docid": "e3acdb12bf902aeee1d6619fd1bd13cc",
"text": "The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.",
"title": ""
}
] |
scidocsrr
|
4db3054a18d1a277dd356c3839353b89
|
Are Mobile Payment and Banking the Killer Apps for Mobile Commerce?
|
[
{
"docid": "540a6dd82c7764eedf99608359776e66",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
}
] |
[
{
"docid": "b22b0a553971d9d81a8196f40f97255c",
"text": "Latent fingerprints are routinely found at crime scenes due to the inadvertent contact of the criminals' finger tips with various objects. As such, they have been used as crucial evidence for identifying and convicting criminals by law enforcement agencies. However, compared to plain and rolled prints, latent fingerprints usually have poor quality of ridge impressions with small fingerprint area, and contain large overlap between the foreground area (friction ridge pattern) and structured or random noise in the background. Accordingly, latent fingerprint segmentation is a difficult problem. In this paper, we propose a latent fingerprint segmentation algorithm whose goal is to separate the fingerprint region (region of interest) from background. Our algorithm utilizes both ridge orientation and frequency features. The orientation tensor is used to obtain the symmetric patterns of fingerprint ridge orientation, and local Fourier analysis method is used to estimate the local ridge frequency of the latent fingerprint. Candidate fingerprint (foreground) regions are obtained for each feature type; an intersection of regions from orientation and frequency features localizes the true latent fingerprint regions. To verify the viability of the proposed segmentation algorithm, we evaluated the segmentation results in two aspects: a comparison with the ground truth foreground and matching performance based on segmented region.",
"title": ""
},
{
"docid": "9244acef01812d757639bd4f09631c22",
"text": "This paper describes the results of the first shared task on Multilingual Emoji Prediction, organized as part of SemEval 2018. Given the text of a tweet, the task consists of predicting the most likely emoji to be used along such tweet. Two subtasks were proposed, one for English and one for Spanish, and participants were allowed to submit a system run to one or both subtasks. In total, 49 teams participated in the English subtask and 22 teams submitted a system run to the Spanish subtask. Evaluation was carried out emoji-wise, and the final ranking was based on macro F-Score. Data and further information about this task can be found at https://competitions. codalab.org/competitions/17344.",
"title": ""
},
{
"docid": "274a88ca3f662b6250d856148389b078",
"text": "This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.",
"title": ""
},
{
"docid": "9164dab8c4c55882f8caecc587c32eb1",
"text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).",
"title": ""
},
{
"docid": "5373559ddf823478416e36c832c1375f",
"text": "We report a technique for two-photon fluorescence imaging with cellular resolution in awake, behaving mice with minimal motion artifact. The apparatus combines an upright, table-mounted two-photon microscope with a spherical treadmill consisting of a large, air-supported Styrofoam ball. Mice, with implanted cranial windows, are head restrained under the objective while their limbs rest on the ball's upper surface. Following adaptation to head restraint, mice maneuver on the spherical treadmill as their heads remain motionless. Image sequences demonstrate that running-associated brain motion is limited to approximately 2-5 microm. In addition, motion is predominantly in the focal plane, with little out-of-plane motion, making the application of a custom-designed Hidden-Markov-Model-based motion correction algorithm useful for postprocessing. Behaviorally correlated calcium transients from large neuronal and astrocytic populations were routinely measured, with an estimated motion-induced false positive error rate of <5%.",
"title": ""
},
{
"docid": "614539c43d5fa2986b9aab3a2562fd85",
"text": "Mobile devices such as smart phones are becoming popular, and realtime access to multimedia data in different environments is getting easier. With properly equipped communication services, users can easily obtain the widely distributed videos, music, and documents they want. Because of its usability and capacity requirements, music is more popular than other types of multimedia data. Documents and videos are difficult to view on mobile phones' small screens, and videos' large data size results in high overhead for retrieval. But advanced compression techniques for music reduce the required storage space significantly and make the circulation of music data easier. This means that users can capture their favorite music directly from the Web without going to music stores. Accordingly, helping users find music they like in a large archive has become an attractive but challenging issue over the past few years.",
"title": ""
},
{
"docid": "ab57df7702fa8589f7d462c80d9a2598",
"text": "The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.",
"title": ""
},
{
"docid": "bd25029684528da4a2cb8912464a567a",
"text": "Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, the entire scene appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are either slow or suffer from artifacts, or restrict the choice of point spread function (PSF). In this paper, we present a new image filter based on rectangle spreading which is constant time per pixel. When used in a layered depth of field framework, our filter eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method to allow flexibility in the appearance of the blur through control over the PSF.",
"title": ""
},
{
"docid": "acbf633cbf612cd0d203d9c191a156da",
"text": "In this work an efficient parallel implementation of the Chirp Scaling Algorithm for Synthetic Aperture Radar processing is presented. The architecture selected for the implementation is the general purpose graphic processing unit, as it is well suited for scientific applications and real-time implementation of algorithms. The analysis of a first implementation led to several improvements which resulted in an important speed-up. Details of the issues found are explained, and the performance improvement of their correction explicitly shown.",
"title": ""
},
{
"docid": "777998d8d239124de463ed28b0a1c27f",
"text": "The Scrum software development framework was designed for the hyperproductive state where productivity increases by 5-10 times over waterfall teams and many co-located teams have achieved this effect. In 2006, Xebia (The Netherlands) started localized projects with half Dutch and half Indian team members. After establishing a localized velocity of five times their waterfall competitors on the same project, they moved the Indian members of the team to India and showed stable velocity with fully distributed teams. The ability to achieve hyperproductivity with distributed, outsourced teams was shown to be a repeatable process and a fully distributed model is now the recommended standard when organizations have disciplined Scrum teams with full implementation of XP engineering practices inside the Scrum. Previous studies used overlapping time zones to ease communication and create a single distributed team. The goal of this report is to go one step further and show the same results with team members separated by the 12.5 hour time difference between India and San Francisco. If Scrum works without overlapping time zones then applying it to the mainstream offshoring practice in North America will be possible. In 2008, Xebia India started engagements with partners like TBD.com, a social networking site in San Francisco. TBD has an existing core team of developers doing Scrum with an established local velocity. Adding Xebia India developers to the San Francisco team with a Fully Distributed Scrum model achieved linear scalability with a globally distributed outsourced team.",
"title": ""
},
{
"docid": "a03d0772d8c3e1fd5c954df2b93757e3",
"text": "The tumor microenvironment is a complex system, playing an important role in tumor development and progression. Besides cellular stromal components, extracellular matrix fibers, cytokines, and other metabolic mediators are also involved. In this review we outline the potential role of hypoxia, a major feature of most solid tumors, within the tumor microenvironment and how it contributes to immune resistance and immune suppression/tolerance and can be detrimental to antitumor effector cell functions. We also outline how hypoxic stress influences immunosuppressive pathways involving macrophages, myeloid-derived suppressor cells, T regulatory cells, and immune checkpoints and how it may confer tumor resistance. Finally, we discuss how microenvironmental hypoxia poses both obstacles and opportunities for new therapeutic immune interventions.",
"title": ""
},
{
"docid": "415423f706491c5ec3df6a3b3bf48743",
"text": "The realm of human uniqueness steadily shrinks; reflecting this, other primates suffer from states closer to depression or anxiety than 'depressive-like' or 'anxiety-like behavior'. Nonetheless, there remain psychiatric domains unique to humans. Appreciating these continuities and discontinuities must inform the choice of neurobiological approach used in studying any animal model of psychiatric disorders. More fundamentally, the continuities reveal how aspects of psychiatric malaise run deeper than our species' history.",
"title": ""
},
{
"docid": "17dc9e16f2df9eb3c169ce521818de1f",
"text": "Acoustic Emission (AE) is a non-destructive testing (NDT) with potential applications for locating and monitoring fatigue cracks during structural health management and prognosis. To do this, a correlation between acoustic emission signal characteristics and crack growth behavior should be established. In this paper, a probabilistic model of fatigue crack length distribution based on acoustic emission is validated. Using the results from AE-based fatigue experiments the relationship between AE count rates and crack growth rates is reviewed for the effect of loading ratio. Predictions of crack growth rates based on AE count rates model show reasonable agreement with the actual crack growth rates from the test results. Bayesian regression analysis is performed to estimate the marginal distribution of the unknown parameters in the model. The results of the Bayesian regression analysis shows that the results are consistent with respect to changes in loading ratio. Additional experimental evidence might be required to further reduce the uncertainties of the proposed model and further study the effects of the changes in loading frequency, sample geometry and material.",
"title": ""
},
{
"docid": "1874bd466665e39dbb4bd28b2b0f0d6e",
"text": "Pattern recognition encompasses two fundamental tasks: description and classification. Given an object to analyze, a pattern recognition system first generates a description of it (i.e., the pattern) and then classifies the object based on that description (i.e., the recognition). Two general approaches for implementing pattern recognition systems, statistical and structural, employ different techniques for description and classification. Statistical approaches to pattern recognition use decision-theoretic concepts to discriminate among objects belonging to different groups based upon their quantitative features. Structural approaches to pattern recognition use syntactic grammars to discriminate among objects belonging to different groups based upon the arrangement of their morphological (i.e., shape-based or structural) features. Hybrid approaches to pattern recognition combine aspects of both statistical and structural pattern recognition. Structural pattern recognition systems are difficult to apply to new domains because implementation of both the description and classification tasks requires domain knowledge. Knowledge acquisition techniques necessary to obtain domain knowledge from experts are tedious and often fail to produce a complete and accurate knowledge base. Consequently, applications of structural pattern recognition have been primarily restricted to domains in which the set of useful morphological features has been established in the literature (e.g., speech recognition and character recognition) and the syntactic grammars can be composed by hand (e.g., electrocardiogram diagnosis). To overcome this limitation, a domain-independent approach to structural pattern recognition is needed that is capable of extracting morphological features and performing classification without relying on domain knowledge. A hybrid system that employs a statistical classification technique to perform discrimination based on structural features is a natural solution. While a statistical classifier is inherently domain independent, the domain knowledge necessary to support the description task can be eliminated with a set of generally-useful morphological features. Such a set of morphological features is suggested as the foundation for the development of a suite of structure detectors to perform generalized feature extraction for structural pattern recognition in time-series data. The ability of the suite of structure detectors to generate features useful for structural pattern recognition is evaluated by comparing the classification accuracies achieved when using the structure detectors versus commonly-used statistical feature extractors. Two real-world databases with markedly different characteristics and established ground truth serve as sources of data for the evaluation. The classification accuracies achieved using the features extracted by the structure detectors were consistently as good as or better than the classification accuracies achieved when using the features generated by the statistical feature extractors, thus demonstrating that the suite of structure detectors effectively performs generalized feature extraction for structural pattern recognition in time-series data.",
"title": ""
},
{
"docid": "a6e18aa7f66355fb8407798a37f53f45",
"text": "We review some of the recent advances in level-set methods and their applications. In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.",
"title": ""
},
{
"docid": "ee5c970c96904c91f700f3b735071821",
"text": "A family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. The kernels are based on the heat equation on the Riemannian manifold defined by the Fisher information metric associated with a statistical family, and generalize the Gaussian kernel of Euclidean space. As an important special case, kernels based on the geometry of multinomial families are derived, leading to kernel-based learning algorithms that apply naturally to discrete data. Bounds on covering numbers and Rademacher averages for the kernels are proved using bounds on the eigenvalues of the Laplacian on Riemannian manifolds. Experimental results are presented for document classification, for which the use of multinomial geometry is natural and well motivated, and improvements are obtained over the standard use of Gaussian or linear kernels, which have been the standard for text classification. This research was partially supported by the Advanced Research and Development Activity in Information Technology (ARDA), contract number MDA904-00-C-2106, and by the National Science Foundation (NSF), grants CCR-0122581 and IIS-0312814. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of ARDA, NSF, or the U.S. government.",
"title": ""
},
{
"docid": "35258abbafac62dbfbd0be08617e95bf",
"text": "Code Reuse Attacks (CRAs) recently emerged as a new class of security exploits. CRAs construct malicious programs out of small fragments (gadgets) of existing code, thus eliminating the need for code injection. Existing defenses against CRAs often incur large performance overheads or require extensive binary rewriting and other changes to the system software. In this paper, we examine a signature-based detection of CRAs, where the attack is detected by observing the behavior of programs and detecting the gadget execution patterns. We first demonstrate that naive signature-based defenses can be defeated by introducing special “delay gadgets” as part of the attack. We then show how a software-configurable signature-based approach can be designed to defend against such stealth CRAs, including the attacks that manage to use longer-length gadgets. The proposed defense (called SCRAP) can be implemented entirely in hardware using simple logic at the commit stage of the pipeline. SCRAP is realized with minimal performance cost, no changes to the software layers and no implications on binary compatibility. Finally, we show that SCRAP generates no false alarms on a wide range of applications.",
"title": ""
},
{
"docid": "880a0dc7a717d9d68761232516b150b5",
"text": "A longstanding vision in distributed systems is to build reliable systems from unreliable components. An enticing formulation of this vision is Byzantine Fault-Tolerant (BFT) state machine replication, in which a group of servers collectively act as a correct server even if some of the servers misbehave or malfunction in arbitrary (“Byzantine”) ways. Despite this promise, practitioners hesitate to deploy BFT systems, at least partly because of the perception that BFT must impose high overheads.\n In this article, we present Zyzzyva, a protocol that uses speculation to reduce the cost of BFT replication. In Zyzzyva, replicas reply to a client's request without first running an expensive three-phase commit protocol to agree on the order to process requests. Instead, they optimistically adopt the order proposed by a primary server, process the request, and reply immediately to the client. If the primary is faulty, replicas can become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima and to achieve throughputs of tens of thousands of requests per second, making BFT replication practical for a broad range of demanding services.",
"title": ""
},
{
"docid": "6da7bf2a501401d90dde033be91d9d32",
"text": "In this paper, we approach the problem of Inverse Reinforcement Learning (IRL) from a rather different perspective. Instead of trying to only mimic an expert as in traditional IRL, we present a method that can also utilise failed or bad demonstrations of a task. In particular, we propose a new IRL algorithm that extends the state-of-the-art method of Maximum Causal Entropy Inverse Reinforcement Learning to exploit such failed demonstrations. Furthermore, we present experimental results showing that our method can learn faster and better than its original counterpart.",
"title": ""
},
{
"docid": "6acd62836639178293b876ae5f6d7397",
"text": "This paper presents a procedure in building a Thai part-of-speech (POS) tagged corpus named ORCHID. It is a collaboration project between Communications Research Laboratory (CRL) of Japan and National Electronics and Computer Technology Center (NECTEC) of Thailand. We proposed a new tagset based on the previous research on Thai parts-of-speech for using in a multi -lingual machine translation project. We marked the corpus in three levels:paragraph, sentence and word. The corpus keeps text information in text information line and numbering line, which are necessary in retrieving process. Since there are no explicit word/sentence boundary, punctuation and inflection in Thai text, we have to separate a paragraph into sentences before tagging the POS. We applied a probabili stic trigram model for simultaneously word segmenting and POS tagging. Rule for syll able construction is additionally used to reduce the number of candidates for computing the probabilit y. The problems in POS assignment are formalized to reduce the ambiguity occurring in case of the similar POSs.",
"title": ""
}
] |
scidocsrr
|
e6afeb516afdfab0415da2f55987094c
|
Gratifications and social network service usage: The mediating role of online experience
|
[
{
"docid": "8dcb0f20c000a30c0d3330f6ac6b373b",
"text": "Although social networking sites (SNSs) have attracted increased attention and members in recent years, there has been little research on it: particularly on how a users’ extroversion or introversion can affect their intention to pay for these services and what other factors might influence them. We therefore proposed and tested a model that measured the users’ value and satisfaction perspectives by examining the influence of these factors in an empirical survey of 288 SNS members. At the same time, the differences due to their psychological state were explored. The causal model was validated using PLSGraph 3.0; six out of eight study hypotheses were supported. The results indicated that perceived value significantly influenced the intention to pay SNS subscription fees while satisfaction did not. Moreover, extroverts thought more highly of the social value of the SNS, while introverts placed more importance on its emotional and price value. The implications of these findings are discussed. Crown Copyright 2010 Published by Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "5931cb779b24065c5ef48451bc46fac4",
"text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.",
"title": ""
},
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "41c718697d19ee3ca0914255426a38ab",
"text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.",
"title": ""
},
{
"docid": "a16cf7d21a9195a8bf35b75df7782c09",
"text": "As system integration becomes an increasingly important challenge for complex real-time systems, there has been a significant demand for supporting real-time systems in virtualized environments. This paper presents RT-Xen, the first real-time hypervisor scheduling framework for Xen, the most widely used open-source virtual machine monitor (VMM). RT-Xen bridges the gap between real-time scheduling theory and Xen, whose wide-spread adoption makes it an attractive platform for integrating a broad range of real-time and embedded systems. Moreover, RT-Xen provides an open-source platform for researchers and integrators to develop and evaluate real-time scheduling techniques, which to date have been studied predominantly via analysis and simulations.\n Extensive experimental results demonstrate the feasibility, efficiency, and efficacy of fixed-priority hierarchical real-time scheduling in RT-Xen. RT-Xen instantiates a suite of fixed-priority servers (Deferrable Server, Periodic Server, Polling Server, and Sporadic Server). While the server algorithms are not new, this empirical study represents the first comprehensive experimental comparison of these algorithms within the same virtualization platform. Our empirical evaluation shows that RT-Xen can provide effective real-time scheduling to guest Linux operating systems at a 1ms quantum, while incurring only moderate overhead for all the fixed-priority server algorithms. While more complex algorithms such as Sporadic Server do incur higher overhead, none of the overhead differences among different server algorithms are significant. Deferrable Server generally delivers better soft real-time performance than the other server algorithms, while Periodic Server incurs high deadline miss ratios in overloaded situations.",
"title": ""
},
{
"docid": "482ff6c78f7b203125781f5947990845",
"text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.",
"title": ""
},
{
"docid": "4bf9ec9d1600da4eaffe2bfcc73ee99f",
"text": "Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Nowadays, large amount of data and information are available, Data can now be stored in many different kinds of databases and information repositories, being available on the Internet. There is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. There are data mining, web mining and knowledge discovery tools and software packages such as WEKA Tool and RapidMiner tool. The work deals with analysis of WEKA, RapidMiner and NetTools spider tools KNIME and Orange. There are various tools available for data mining and web mining. Therefore awareness is required about the quantitative investigation of these tools. This paper focuses on various functional, practical, cognitive as well as analysis aspects that users may be looking for in the tools. Complete study addresses the usefulness and importance of these tools including various aspects. Analysis presents various benefits of these data mining tools along with desired aspects and the features of current tools. KEYWORDSData Mining, KDD, Data Mining Tools.",
"title": ""
},
{
"docid": "f87459e12d6dba8f3a04424c4db709f6",
"text": "The study of empathy, a translation of the term 'Einfühlung', originated in 19th century Germany in the sphere of aesthetics, and was followed by studies in psychology and then neuroscience. During the past decade the links between empathy and art have started to be investigated, but now from the neuroscientific perspective, and two different approaches have emerged. Recently, the primacy of the mirror neuron system and its association with automaticity and imitative, simulated movement has been envisaged. But earlier, a number of eminent art historians had pointed to the importance of cognitive responses to art; these responses might plausibly be subserved by alternative neural networks. Focusing here mainly on pictures depicting pain and evoking empathy, both approaches are considered by summarizing the evidence that either supports the involvement of the mirror neuron system, or alternatively suggests other neural networks are likely to be implicated. The use of such pictures in experimental studies exploring the underlying neural processes, however, raises a number of concerns, and suggests caution is exercised in drawing conclusions concerning the networks that might be engaged. These various networks are discussed next, taking into account the affective and sensory components of the pain experience, before concluding that both mirror neuron and alternative neural networks are likelyto be enlisted in the empathetic response to images of pain. A somewhat similar duality of spontaneous and cognitive processes may perhaps also be paralleled in the creation of such images. While noting that some have repudiated the neuroscientific approach to the subject, pictures are nevertheless shown here to represent an unusual but invaluable tool in the study of pain and empathy.",
"title": ""
},
{
"docid": "aab35e9350694f8ba5bdf31a19dcfe43",
"text": "Objective: To study the preliminary characteristics and pharmacognostic of Euphorbia hirta Linn a ruderal plant, harvested in Ivory Coast. Methods: The pharmacognostic study involved the performance of macroscopic and microscopic examinations of fresh and dried plant. The physico-chemical study involved the determination of moisture content, total ash, ashes sulfuric acid and hydrochloric acid insoluble ashes. Results: The pharmacognostic study has revealed a small herb with finely toothed leaves, asymmetrical, arranged in pairs, opposite a red vinous stem. The inflorescence is composed of small yellowish flowers unisexual, solitary at each node. The fruits are small capsules hairy three valves. The anatomical and histological section of the limbus showed a protruding cylindrical midrib on the underside and slightly depressed on the other. Medullary parenchyma reduced with large polygonal cells and lower collenchyme are cellulosic. Sclerenchyma sheathing the driver device consists of fibrovascular bundle. Secondary and arranged in radial alignment are wood, having vessels with lignified walls and bast with small cells with cellulose walls. The stem, circular, presented a less developed bark, an epidermis, a collenchyme and parenchyma. The developed central cylinder consists of a main parenchyma are observed wherein a centrifugal primary wood, the primary centripetal bast and sclerenchyma in radial alignment around the cambium and the secondary wood. Conclusion: pharmacognostic analysis and physico-chemical characteristics can help in the efficient use of this medicinal plant.",
"title": ""
},
{
"docid": "9bf8ebbb6bff9abcda3e8bb9d4fe5644",
"text": "In this paper, we propose an online learning algorithm based on a Rao-Blackwellized particle filter for spatial concept acquisition and mapping. We have proposed a nonparametric Bayesian spatial concept acquisition model (SpCoA). We propose a novel method (SpCoSLAM) integrating SpCoA and FastSLAM in the theoretical framework of the Bayesian generative model. The proposed method can simultaneously learn place categories and lexicons while incrementally generating an environmental map. Furthermore, the proposed method has scene image features and a language model added to SpCoA. In the experiments, we tested online learning of spatial concepts and environmental maps in a novel environment of which the robot did not have a map. Then, we evaluated the results of online learning of spatial concepts and lexical acquisition. The experimental results demonstrated that the robot was able to more accurately learn the relationships between words and the place in the environmental map incrementally by using the proposed method.",
"title": ""
},
{
"docid": "b4b6d9c35542b90eaee5de29664c86db",
"text": "In this paper, two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly simplifies the decoding complexity of belief propagation in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge about the channel characteristics. Both algorithms yield a good performance–complexity tradeoff and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.",
"title": ""
},
{
"docid": "ba3e9746291c2a355321125093b41c88",
"text": "Sentiment analysis of microblogs such as Twitter has recently gained a fair amount of attention. One of the simplest sentiment analysis approaches compares the words of a posting against a labeled word list, where each word has been scored for valence, — a “sentiment lexicon” or “affective word lists”. There exist several affective word lists, e.g., ANEW (Affective Norms for English Words) developed before the advent of microblogging and sentiment analysis. I wanted to examine how well ANEW and other word lists performs for the detection of sentiment strength in microblog posts in comparison with a new word list specifically constructed for microblogs. I used manually labeled postings from Twitter scored for sentiment. Using a simple word matching I show that the new word list may perform better than ANEW, though not as good as the more elaborate approach found in SentiStrength.",
"title": ""
},
{
"docid": "03e7d909183b66cc3b45eed6ac2de9dd",
"text": "A s the millennium draws to a close, it is apparent that one question towers above all others in the life sciences: How does the set of processes we call mind emerge from the activity of the organ we call brain? The question is hardly new. It has been formulated in one way or another for centuries. Once it became possible to pose the question and not be burned at the stake, it has been asked openly and insistently. Recently the question has preoccupied both the experts—neuroscientists, cognitive scientists and philosophers—and others who wonder about the origin of the mind, specifically the conscious mind. The question of consciousness now occupies center stage because biology in general and neuroscience in particular have been so remarkably successful at unraveling a great many of life’s secrets. More may have been learned about the brain and the mind in the 1990s—the so-called decade of the brain—than during the entire previous history of psychology and neuroscience. Elucidating the neurobiological basis of the conscious mind—a version of the classic mind-body problem—has become almost a residual challenge. Contemplation of the mind may induce timidity in the contemplator, especially when consciousness becomes the focus of the inquiry. Some thinkers, expert and amateur alike, believe the question may be unanswerable in principle. For others, the relentless and exponential increase in new knowledge may give rise to a vertiginous feeling that no problem can resist the assault of science if only the theory is right and the techniques are powerful enough. The debate is intriguing and even unexpected, as no comparable doubts have been raised over the likelihood of explaining how the brain is responsible for processes such as vision or memory, which are obvious components of the larger process of the conscious mind. The multimedia mind-show occurs constantly as the brain processes external and internal sensory events. As the brain answers the unasked question of who is experiencing the mindshow, the sense of self emerges. by Antonio R. Damasio",
"title": ""
},
{
"docid": "893a2f945ddd6991a6dcb160b4ec9188",
"text": "To increase the mechanical robustness of drive system and to make the drive cheaper elimination of the position sensor is highly encouraged. Sensors are not reliable in explosive environment like in chemical industries and may cause the EMI problem. This has made the speed and position sensorless drive very attractive. A speed and position estimator for PMSM drive has been investigated. In the algorithm the PMSM used as reference model. The adaptation mechanism uses a PI controller to process the error between reference model and adjustable model. The estimation algorithm used is independent of stator resistance, computationally less complex, free from integrator problem because back-emf estimation is not required and provides stable operation of drive system. So the performance at low and zero speed is also good. The estimation algorithm is implemented in MATLAB. Simulations results show the validity of MRAS estimator presented here.",
"title": ""
},
{
"docid": "697f49b15850b0df8645b0939d6decae",
"text": "In present age of computers, there are various resources for gathering information related to given query like Radio Stations, Television, Internet and many more. Among them, Internet is considered as major factor for obtaining any information about a given domain. When a user wants to find some information, he/she enters a query and results are produced via hyperlinks linked to various documents available on web. But the information that is retrieved to us may or may not be relevant. This irrelevance is caused due to huge collection of documents available on web. Traditional search engines are based on keyword based searching that is unable to transform raw data into knowledgeable representation data. It is a cumbersome task to extract relevant information from large collection of web documents. These shortcomings have led to the concept of Semantic Web (SW) and Ontology into existence. Semantic Web (SW) is a well defined portal that helps in extracting relevant information using many Information Retrieval (IR) techniques. Current Information Retrieval (IR) techniques are not so advanced that they can be able to exploit semantic knowledge within documents and give precise result. The terms, Information Retrieval (IR), Semantic Web (SW) and Ontology are used differently but they are interconnected with each other. Information Retrieval (IR) technology and Web based Indexing contributes to existence of Semantic Web. Use of Ontology also contributes in building new generation of webSemantic Web. With the help of ontologies, we can make content of web as it will be markup with the help of Semantic Web documents (SWD’s). Ontology is considered as backbone of Software system. It improves understanding between concepts used in Semantic Web (SW). So, there is need to build an ontology that uses well defined methodology and process of developing ontology is called Ontology Development.",
"title": ""
},
{
"docid": "b418734faef12396bbcef4df356c6fb6",
"text": "Active learning techniques were employed for classification of dialogue acts over two dialogue corpora, the English humanhuman Switchboard corpus and the Spanish human-machine Dihana corpus. It is shown clearly that active learning improves on a baseline obtained through a passive learning approach to tagging the same data sets. An error reduction of 7% was obtained on Switchboard, while a factor 5 reduction in the amount of labeled data needed for classification was achieved on Dihana. The passive Support Vector Machine learner used as baseline in itself significantly improves the state of the art in dialogue act classification on both corpora. On Switchboard it gives a 31% error reduction compared to the previously best reported result.",
"title": ""
},
{
"docid": "d603e92c3f3c8ab6a235631ee3a55d52",
"text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. We rst show how to extend the standard notion of classiication by allowing each instance to be associated with multiple labels. We then discuss our approach for multiclass multi-label text categorization which is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identiication from unconstrained spoken customer responses.",
"title": ""
},
{
"docid": "cb6c4f97fcefa003e890c8c4a97ff34b",
"text": "When interacting and communicating with virtual agents in immersive environments, the agents’ behavior should be believable and authentic. Thereby, one important aspect is a convincing auralization of their speech. In this work-in-progress paper a study design to evaluate the effect of adding directivity to speech sound source on the perceived social presence of a virtual agent is presented. Therefore, we describe the study design and discuss first results of a prestudy as well as consequential improvements of the design.",
"title": ""
},
{
"docid": "52d5dc571b13d47cc281504a0b890a67",
"text": "We replicated a controlled experiment first run in the early 1980’s to evaluate the effectiveness and efficiency of 50 student subjects who used three defect-detection techniques to observe failures and isolate faults in small C programs. The three techniques were code reading by stepwise abstraction, functional (black-box) testing, and structural (white-box) testing. Two internal replications showed that our relatively inexperienced subjects were similarly effective at observing failures and isolating faults with all three techniques. However, our subjects were most efficient at both tasks when they used functional testing. Some significant differences among the techniques in their effectiveness at isolating faults of different types were seen. These results suggest that inexperienced subjects can apply a formal verification technique (code reading) as effectively as an execution-based validation technique, but they are most efficient when using functional testing.",
"title": ""
},
{
"docid": "02e961880a7925eb9d41c372498cb8d0",
"text": "Since debt is typically riskier in recessions, transfers from equity holders to debt holders associated with each investment also tend to concentrate in recessions. Such systematic risk exposure of debt overhang has important implications for the investment and financing decisions of firms and on the ex ante costs of debt overhang. Using a calibrated dynamic capital structure/real option model, we show that the costs of debt overhang become significantly higher in the presence of macroeconomic risk. We also provide several new predictions that relate the cyclicality of a firm’s assets in place and growth options to its investment and capital structure decisions. We are grateful to Santiago Bazdresch, Bob Goldstein, David Mauer (WFA discussant), Erwan Morellec, Stew Myers, Chris Parsons, Michael Roberts, Antoinette Schoar, Neng Wang, Ivo Welch, and seminar participants at MIT, Federal Reserve Bank of Boston, Boston University, Dartmouth, University of Lausanne, University of Minnesota, the Third Risk Management Conference at Mont Tremblant, the Minnesota Corporate Finance Conference, and the WFA for their comments. MIT Sloan School of Management and NBER. Email: huichen@mit.edu. Tel. 617-324-3896. MIT Sloan School of Management. Email: manso@mit.edu. Tel. 617-253-7218.",
"title": ""
},
{
"docid": "1ba4e9a711294ddca1fb803fa057d8a4",
"text": "Motivation: The previously developed Search Ontology (SO) allows domain experts to formally specify domain concepts, search terms associated to a domain, and rules describing domain concepts. So far, Lucene search queries can be generated from information contained in the SO and can be used for querying literature data bases or PubMed. However, this is still insufficient, since these queries are not well suited for querying XML documents because they are not following their structure. However, in the medical domain, many information items are coded in XML. Thus, querying structured XML documents is crucial for retrieving similar cases or for identifying potential study participants. For example , information items of patients with a similar tumor classification documented in a certain section of the respective pathology report need to be retrieved. This requires a precise definition of queries. In this paper , we introduce a concept for the generation of such queries using a Search Ontology XML extension to enable semantic searches on struc-tured data. Results: For a gain of precision, the paragraph of a document need to be specified, in which a specific information item expressed in a query is expected to appear. The Search Ontology XML Extension (SOX) connects search terms to certain sections in XML documents. The extension consists of a class which represents the XML structure and a relation between search terms and this XML structure. This enables an automatic generation of XPath expressions, which makes an efficient and precise search of structured pathology reports in XML databases possible. The combination of standardized Electronic Health Records with an ontol-ogy based query method promises a gain of precision, a high degree of interoperability and long term durability of both, XML documents and queries on XML documents.",
"title": ""
}
] |
scidocsrr
|
38cf3ca9c7798b26c69d314cedab6cf5
|
LCI: a social channel analysis platform for live customer intelligence
|
[
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
}
] |
[
{
"docid": "a7c79045bcbd9fac03015295324745e3",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "1d956bafdb6b7d4aa2afcfeb77ac8cbb",
"text": "In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on the maximum likelihood estimation as well as discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in a sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.",
"title": ""
},
{
"docid": "0ee891e2f75553262ebaaaf2be1d8e27",
"text": "How do you know when your core needs to change? And how do you determine what should replace it? From an in-depth study of 25 companies, the author, a strategy consultant, has discovered that it's possible to measure the vitality of a business's core. If it needs reinvention, he says, the best course is to mine hidden assets. Some of the 25 companies were in deep crisis when they began the process of redefining themselves. But, says Zook, management teams can learn to recognize early signs of erosion. He offers five diagnostic questions with which to evaluate the customers, key sources of differentiation, profit pools, capabilities, and organizational culture of your core business. The next step is strategic regeneration. In four-fifths of the companies Zook examined, a hidden asset was the centerpiece of the new strategy. He provides a map for identifying the hidden assets in your midst, which tend to fall into three categories: undervalued business platforms, untapped insights into customers, and underexploited capabilities. The Swedish company Dometic, for example, was manufacturing small absorption refrigerators for boats and RVs when it discovered a hidden asset: its understanding of, and access to, customers in the RV market. The company took advantage of a boom in that market to refocus on complete systems for live-in vehicles. The Danish company Novozymes, which produced relatively low-tech commodity enzymes such as those used in detergents, realized that its underutilized biochemical capability in genetic and protein engineering was a hidden asset and successfully refocused on creating bioengineered specialty enzymes. Your next core business is not likely to announce itself with fanfare. Use the author's tools to conduct an internal audit of possibilities and pinpoint your new focus.",
"title": ""
},
{
"docid": "ab4e2ab6b206fece59f40945c82d5cd7",
"text": "Knowledge distillation is effective to train small and generalisable network models for meeting the low-memory and fast running requirements. Existing offline distillation methods rely on a strong pre-trained teacher, which enables favourable knowledge discovery and transfer but requires a complex two-phase training procedure. Online counterparts address this limitation at the price of lacking a highcapacity teacher. In this work, we present an On-the-fly Native Ensemble (ONE) learning strategy for one-stage online distillation. Specifically, ONE trains only a single multi-branch network while simultaneously establishing a strong teacher onthe-fly to enhance the learning of target network. Extensive evaluations show that ONE improves the generalisation performance a variety of deep neural networks more significantly than alternative methods on four image classification dataset: CIFAR10, CIFAR100, SVHN, and ImageNet, whilst having the computational efficiency advantages.",
"title": ""
},
{
"docid": "4a27c9c13896eb50806371e179ccbf33",
"text": "A geographical information system (CIS) is proposed as a suitable tool for mapping the spatial distribution of forest fire danger. Using a region severely affected by forest fires in Central Spain as the study area, topography, meteorological data, fuel models and human-caused risk were mapped and incorporated within a GIS. Three danger maps were generated: probability of ignition, fuel hazard and human risk, and all of them were overlaid in an integrated fire danger map, based upon the criteria established by the Spanish Forest Service. CIS make it possible to improve our knowledge of the geographical distribution of fire danger, which is crucial for suppression planning (particularly when hotshot crews are involved) and for elaborating regional fire defence plans.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "cc2822b15ccf29978252b688111d58cd",
"text": "Today, even a moderately sized corporate intranet contains multiple firewalls and routers, which are all used to enforce various aspects of the global corporate security policy. Configuring these devices to work in unison is difficult, especially if they are made by different vendors. Even testing or reverse-engineering an existing configuration (say, when a new security administrator takes over) is hard. Firewall configuration files are written in low-level formalisms, whose readability is comparable to assembly code, and the global policy is spread over all the firewalls that are involved. To alleviate some of these difficulties, we designed and implemented a novel firewall analysis tool. Our software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Our tool uses a minimal description of the network topology, and directly parses the various vendor-specific lowlevel configuration files. It interacts with the user through a query-and-answer session, which is conducted at a much higher level of abstraction. A typical question our tool can answer is “from which machines can our DMZ be reached, and with which services?”. Thus, our tool complements existing vulnerability analysis tools, as it can be used before a policy is actually deployed, it operates on a more understandable level of abstraction, and it deals with all the firewalls at once.",
"title": ""
},
{
"docid": "954d0ef5a1a648221ce8eb3f217f4071",
"text": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this",
"title": ""
},
{
"docid": "13e61389de352298bf9581bc8a8714cc",
"text": "A bacterial gene (neo) conferring resistance to neomycin-kanamycin antibiotics has been inserted into SV40 hybrid plasmid vectors and introduced into cultured mammalian cells by DNA transfusion. Whereas normal cells are killed by the antibiotic G418, those that acquire and express neo continue to grow in the presence of G418. In the course of the selection, neo DNA becomes associated with high molecular weight cellular DNA and is retained even when cells are grown in the absence of G418 for extended periods. Since neo provides a marker for dominant selections, cell transformation to G418 resistance is an efficient means for cotransformation of nonselected genes.",
"title": ""
},
{
"docid": "6d262139067d030c3ebb1169e93c6422",
"text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.",
"title": ""
},
{
"docid": "0af8bbdda9482f24dfdfc41046382e1b",
"text": "In this paper, we have examined the effectiveness of \"style matrix\" which is used in the works on style transfer and texture synthesis by Gatys et al. in the context of image retrieval as image features. A style matrix is presented by Gram matrix of the feature maps in a deep convolutional neural network. We proposed a style vector which are generated from a style matrix with PCA dimension reduction. In the experiments, we evaluate image retrieval performance using artistic images downloaded from Wikiarts.org regarding both artistic styles ans artists. We have obtained 40.64% and 70.40% average precision for style search and artist search, respectively, both of which outperformed the results by common CNN features. In addition, we found PCA-compression boosted the performance.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "dae40fa32526bf965bad70f98eb51bb7",
"text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.",
"title": ""
},
{
"docid": "4466e4022ddf949a57431f3684c4925f",
"text": "In this work, we propose a goal-driven collaborative task that contains vision, language, and action in a virtual environment as its core components. Specifically, we develop a collaborative ‘Image Drawing’ game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. Two players, Teller and Drawer, are involved. The Teller sees an abstract scene containing multiple clip arts in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip arts. The two players communicate via two-way communication using natural language. We collect the CoDraw dataset of ∼10K dialogs consisting of 138K messages exchanged between a Teller and a Drawer from Amazon Mechanical Turk (AMT). We analyze our dataset and present three models to model the players’ behaviors, including an attention model to describe and draw multiple clip arts at each round. The attention models are quantitatively compared to the other models to show how the conventional approaches work for this new task. We also present qualitative visualizations.",
"title": ""
},
{
"docid": "2f1e059a0c178b3703c31ad31761dadc",
"text": "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.",
"title": ""
},
{
"docid": "3ba2ba9e2fc55476d86bcd8c857c9401",
"text": "While model queries are important components in modeldriven tool chains, they are still frequently implemented using traditional programming languages, despite the availability of model query languages due to performance and expressiveness issues. In the current paper, we propose EMF-IncQuery as a novel, graph-based query language for EMF models by adapting the query language of the Viatra2 model transformation framework to inherit its concise, declarative nature, but to properly tailor the new query language to the modeling specificities of EMF. The EMF-IncQuery language includes (i) structural restrictions for queries imposed by EMF models, (ii) syntactic sugar and notational shorthand in queries, (iii) true semantic extensions which introduce new query features, and (iv) a constraint-based static type checking method to detect violations of EMF-specific type inference rules.",
"title": ""
},
{
"docid": "ce5273747928d6e0f5adeb96ab6857a3",
"text": "We introduce a reinforcement learningbased approach to simultaneous machine translation—producing a translation while receiving input words— between languages with drastically different word orders: from verb-final languages (e.g., German) to verb-medial languages (English). In traditional machine translation, a translator must “wait” for source material to appear before translation begins. We remove this bottleneck by predicting the final verb in advance. We use reinforcement learning to learn when to trust predictions about unseen, future portions of the sentence. We also introduce an evaluation metric to measure expeditiousness and quality. We show that our new translation model outperforms batch and monotone translation strategies.",
"title": ""
},
{
"docid": "ee11c968b4280f6da0b1c0f4544bc578",
"text": "A report is presented of some results of an ongoing project using neural-network modeling and learning techniques to search for and decode nonlinear regularities in asset price movements. The author focuses on the case of IBM common stock daily returns. Having to deal with the salient features of economic data highlights the role to be played by statistical inference and requires modifications to standard learning techniques which may prove useful in other contexts.<<ETX>>",
"title": ""
},
{
"docid": "eeafde1980fc144f1dcef6f84068bbd4",
"text": "The Mobile Computing in a Fieldwork Environment (MCFE) project aims to develop context-aware tools for hand-held computers that will support the authoring, presentation and management of field notes. The project deliverables will be designed to support student fieldwork exercises and our initial targets are fieldwork training in archaeology and the environmental sciences. Despite this specialised orientation, we anticipate that many features of these tools will prove to be equally well suited to use in research data collection in these and other disciplines.",
"title": ""
}
] |
scidocsrr
|
3cbe70d18340a07f280a6b560299210a
|
Skeleton Based Shape Matching and Retrieval
|
[
{
"docid": "9e3de4720dade2bb73d78502d7cccc8b",
"text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "46bbc38bc45d9998fcd517edd253c091",
"text": "Visual data analysis involves both open-ended and focused exploration. Manual chart specification tools support question answering, but are often tedious for early-stage exploration where systematic data coverage is needed. Visualization recommenders can encourage broad coverage, but irrelevant suggestions may distract users once they commit to specific questions. We present Voyager 2, a mixed-initiative system that blends manual and automated chart specification to help analysts engage in both open-ended exploration and targeted question answering. We contribute two partial specification interfaces: wildcards let users specify multiple charts in parallel, while related views suggest visualizations relevant to the currently specified chart. We present our interface design and applications of the CompassQL visualization query language to enable these interfaces. In a controlled study we find that Voyager 2 leads to increased data field coverage compared to a traditional specification tool, while still allowing analysts to flexibly drill-down and answer specific questions.",
"title": ""
},
{
"docid": "3744e835d66ba66e612984097a2337da",
"text": "Catastrophic forgetting is a well studied problem in artificial neural networks in which past representations are rapidly lost as new representations are constructed. We hypothesize that such forgetting occurs due to overlap in the hidden layers, as well as the global nature in which neurons encode information. We introduce a novel technique to mitigate forgetting which effectively minimizes activation overlapping by using online clustering to effectively select neurons in the feedforward and back-propagation phases. We demonstrate the memory retention properties of the proposed scheme using the MNIST digit recognition data set.",
"title": ""
},
{
"docid": "94f11255e531a47969ba18112bf22777",
"text": "Basic scientific interest in using a semiconducting electrode in molecule-based electronics arises from the rich electrostatic landscape presented by semiconductor interfaces. Technological interest rests on the promise that combining existing semiconductor (primarily Si) electronics with (mostly organic) molecules will result in a whole that is larger than the sum of its parts. Such a hybrid approach appears presently particularly relevant for sensors and photovoltaics. Semiconductors, especially Si, present an important experimental test-bed for assessing electronic transport behavior of molecules, because they allow varying the critical interface energetics without, to a first approximation, altering the interfacial chemistry. To investigate semiconductor-molecule electronics we need reproducible, high-yield preparations of samples that allow reliable and reproducible data collection. Only in that way can we explore how the molecule/electrode interfaces affect or even dictate charge transport, which may then provide a basis for models with predictive power.To consider these issues and questions we will, in this Progress Report, review junctions based on direct bonding of molecules to oxide-free Si.describe the possible charge transport mechanisms across such interfaces and evaluate in how far they can be quantified.investigate to what extent imperfections in the monolayer are important for transport across the monolayer.revisit the concept of energy levels in such hybrid systems.",
"title": ""
},
{
"docid": "ed46f9225b60c5f128257310cd1b27ed",
"text": "We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.",
"title": ""
},
{
"docid": "b882d6bc42e34506ba7ab26ed44d9265",
"text": "Production datacenters operate under various uncertainties such as tra c dynamics, topology asymmetry, and failures. Therefore, datacenter load balancing schemes must be resilient to these uncertainties; i.e., they should accurately sense path conditions and timely react to mitigate the fallouts. Despite signi cant e orts, prior solutions have important drawbacks. On the one hand, solutions such as Presto and DRB are oblivious to path conditions and blindly reroute at xed granularity. On the other hand, solutions such as CONGA and CLOVE can sense congestion, but they can only reroute when owlets emerge; thus, they cannot always react timely to uncertainties. To make things worse, these solutions fail to detect/handle failures such as blackholes and random packet drops, which greatly degrades their performance. In this paper, we introduce Hermes, a datacenter load balancer that is resilient to the aforementioned uncertainties. At its heart, Hermes leverages comprehensive sensing to detect path conditions including failures unattended before, and it reacts using timely yet cautious rerouting. Hermes is a practical edge-based solution with no switch modi cation. We have implemented Hermes with commodity switches and evaluated it through both testbed experiments and large-scale simulations. Our results show that Hermes achieves comparable performance to CONGA and Presto in normal cases, and well handles uncertainties: under asymmetries, Hermes achieves up to 10% and 20% better ow completion time (FCT) than CONGA and CLOVE; under switch failures, it outperforms all other schemes by over 32%.",
"title": ""
},
{
"docid": "35e4df3d3da5fee60235bf7680de7fd1",
"text": "Many people who would benefit from mental health services opt not to pursue them or fail to fully participate once they have begun. One of the reasons for this disconnect is stigma; namely, to avoid the label of mental illness and the harm it brings, people decide not to seek or fully participate in care. Stigma yields 2 kinds of harm that may impede treatment participation: It diminishes self-esteem and robs people of social opportunities. Given the existing literature in this area, recommendations are reviewed for ongoing research that will more comprehensively expand understanding of the stigma-care seeking link. Implications for the development of antistigma programs that might promote care seeking and participation are also reviewed.",
"title": ""
},
{
"docid": "6e1b95d0c2cff4c372f451c8636b973e",
"text": "Multiple sclerosis (MS) is a chronic inflammatory demyelinating and neurodegenerative disease of central nervous system that affects both white and gray matter. Idiopathic calcification of the basal ganglia is a rare neurodegenerative disorder of unknown cause that is characterized by sporadic or familial brain calcification. Concurrence of multiple sclerosis (MS) and idiopathic basal ganglia calcification (Fahr's disease) is very rare event. In this study, we describe a cooccurrence of idiopathic basal ganglia calcification with multiple sclerosis. The association between this disease and MS is unclear and also maybe probably coincidental.",
"title": ""
},
{
"docid": "63e8cf0d01b07bedb2cc0d182dff5e3e",
"text": "Machine Reading and Comprehension recently has drawn a fair amount of attention in the field of natural language processing. In this paper, we consider integrating side information to improve machine comprehension on answering cloze-style questions more precisely. To leverage the external information, we present a novel attention-based architecture which could feed the side information representations into word level embeddings to explore the comprehension performance. Our experiments show consistent improvements of our model over various baselines.",
"title": ""
},
{
"docid": "2bc11bc1f29594d60a5f110dc499888f",
"text": "Our previous research demonstrated high, sustained satiety effects of stabilized food foams relative to their non-aerated compositions. Here we test if the energy and macronutrients in a stabilized food foam are critical for its previously demonstrated satiating effects. In a randomized, crossover design, 72 healthy subjects consumed 400 mL of each of four foams, one per week over four weeks, 150 min after a standardized breakfast. Appetite ratings were collected for 180 min post-foam. The reference was a normal energy food foam (NEF1, 280 kJ/400 mL) similar to that used in our previous research. This was compared to a very low energy food foam (VLEF, 36 kJ/400 mL) and 2 alternative normal energy foams (NEF2 and NEF3) testing possible effects of compositional differences other than energy (i.e. emulsifier and carbohydrate source). Appetite ratings were quantified as area under the curve (AUC) and time to return to baseline (TTRTB). Equivalence to NEF1 was predefined as the 90% confidence interval of between-treatment differences in AUC being within -5 to +5 mm/min. All treatments similarly affected appetite ratings, with mean AUC for fullness ranging between 49.1 and 52.4 mm/min. VLEF met the statistical criterion for equivalence to NEF1 for all appetite AUC ratings, but NEF2 and NEF3 did not. For all foams the TTRTB for satiety and fullness were consistently between 150 and 180 min, though values were shortest for NEF2 and especially NEF3 foams for most appetite scales. In conclusion, the high, sustained satiating effects of these food foams are independent of energy and macronutrient content at the volumes tested.",
"title": ""
},
{
"docid": "b55eecaefdeeac014146dc9987d8f3c1",
"text": "RFID technologies have revolutionized the asset tracking industry, with applications ranging from automated checkout to monitoring the medication intakes of elderlies. In all these applications, fast, and in some cases energy efficient, tag reading is desirable, especially with increasing tag numbers. In practice, tag reading protocols face many problems. A key one being tag collision, which occurs when multiple tags reply simultaneously to a reader. As a result, an RFID reader experiences low tag reading performance, and wastes valuable energy. Therefore, it is important that RFID application developers are aware of current tag reading protocols. To this end, this paper surveys, classifies, and compares state-of-the-art tag reading protocols. Moreover, it presents research directions for existing and future tag reading protocols.",
"title": ""
},
{
"docid": "95a7892f685321d9c4608fbdc67b08aa",
"text": "In order to identify and explore the strengths and weaknesses of business intelligence (BI) initiatives, managers in charge need to assess the maturity of their BI efforts. For this, a wide range of maturity models has been developed, but these models often focus on technical details and do not address the potential value proposition of BI. Based on an extensive literature review and an empirical study, we develop and evaluate a theoretical model of impact-oriented BI maturity. Building on established IS theories, the model integrates BI deployment, BI usage, individual impact, and organizational performance. This conceptualization helps to refocus the topic of BI maturity to business needs and can be used as a theoretical foundation for future research.",
"title": ""
},
{
"docid": "12818095167dbf85d5d717121f00f533",
"text": "Sarmento, H, Figueiredo, A, Lago-Peñas, C, Milanovic, Z, Barbosa, A, Tadeu, P, and Bradley, PS. Influence of tactical and situational variables on offensive sequences during elite football matches. J Strength Cond Res 32(8): 2331-2339, 2018-This study examined the influence of tactical and situational variables on offensive sequences during elite football matches. A sample of 68 games and 1,694 offensive sequences from the Spanish La Liga, Italian Serie A, German Bundesliga, English Premier League, and Champions League were analyzed using χ and logistic regression analyses. Results revealed that counterattacks (odds ratio [OR] = 1.44; 95% confidence interval [CI]: 1.13-1.83; p < 0.01) and fast attacks (OR = 1.43; 95% CI: 1.11-1.85; p < 0.01) increased the success of an offensive sequence by 40% compared with positional attacks. The chance of an offensive sequence ending effectively in games from the Spanish, Italian, and English Leagues were higher than that in the Champions League. Offensive sequences that started in the preoffensive or offensive zones were more successful than those started in the defensive zones. An increase of 1 second in the offensive sequence duration and an extra pass resulted in a decrease of 2% (OR = 0.98; 95% CI: 0.98-0.99; p < 0.001) and 7% (OR = 0.93; 95% CI: 0.91-0.96; p < 0.001), respectively, in the probability of its success. These findings could assist coaches in designing specific training situations that improve the effectiveness of the offensive process.",
"title": ""
},
{
"docid": "afb0d6a917fd0c19aaaa045c145a60d3",
"text": "This paper proposes a new approach to using machine learning to detect grasp poses on novel objects presented in clutter. The input to our algorithm is a point cloud and the geometric parameters of the robot hand. The output is a set of hand poses that are expected to be good grasps. There are two main contributions. First, we identify a set of necessary conditions on the geometry of a grasp that can be used to generate a set of grasp hypotheses. This helps focus grasp detection away from regions where no grasp can exist. Second, we show how geometric grasp conditions can be used to generate labeled datasets for the purpose of training the machine learning algorithm. This enables us to generate large amounts of training data and it grounds our training labels in grasp mechanics. Overall, our method achieves an average grasp success rate of 88% when grasping novels objects presented in isolation and an average success rate of 73% when grasping novel objects presented in dense clutter. This system is available as a ROS package at http://wiki.ros.org/agile_grasp.",
"title": ""
},
{
"docid": "7010278254ee0fadb7b59cb05169578a",
"text": "INTRODUCTION\nLumbar disc herniation (LDH) is a common condition in adults and can impose a heavy burden on both the individual and society. It is defined as displacement of disc components beyond the intervertebral disc space. Various conservative treatments have been recommended for the treatment of LDH and physical therapy plays a major role in the management of patients. Therapeutic exercise is effective for relieving pain and improving function in individuals with symptomatic LDH. The aim of this systematic review is to evaluate the effectiveness of motor control exercise (MCE) for symptomatic LDH.\n\n\nMETHODS AND ANALYSIS\nWe will include all clinical trial studies with a concurrent control group which evaluated the effect of MCEs in patients with symptomatic LDH. We will search PubMed, SCOPUS, PEDro, SPORTDiscus, CINAHL, CENTRAL and EMBASE with no restriction of language. Primary outcomes of this systematic review are pain intensity and functional disability and secondary outcomes are functional tests, muscle thickness, quality of life, return to work, muscle endurance and adverse events. Study selection and data extraction will be performed by two independent reviewers. The assessment of risk of bias will be implemented using the PEDro scale. Publication bias will be assessed by funnel plots, Begg's and Egger's tests. Heterogeneity will be evaluated using the I2 statistic and the χ2 test. In addition, subgroup analyses will be conducted for population and the secondary outcomes. All meta-analyses will be performed using Stata V.12 software.\n\n\nETHICS AND DISSEMINATION\nNo ethical concerns are predicted. The systematic review findings will be published in a peer-reviewed journal and will also be presented at national/international academic and clinical conferences.\n\n\nTRIAL REGISTRATION NUMBER\nCRD42016038166.",
"title": ""
},
{
"docid": "1d82d994635a0bd0137febd74b8c3835",
"text": "research A. Agrawal J. Basak V. Jain R. Kothari M. Kumar P. A. Mittal N. Modani K. Ravikumar Y. Sabharwal R. Sureka Marketing decisions are typically made on the basis of research conducted using direct mailings, mall intercepts, telephone interviews, focused group discussion, and the like. These methods of marketing research can be time-consuming and expensive, and can require a large amount of effort to ensure accurate results. This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments. These concepts, along with the opportunity provided by the increasing numbers of online shoppers, enable rapid, systematic, and cost-effective marketing research.",
"title": ""
},
{
"docid": "c63dcdd615007dfddca77e7bdf52c0eb",
"text": "Essential tremor (ET) is a common movement disorder but its pathogenesis remains poorly understood. This has limited the development of effective pharmacotherapy. The current therapeutic armamentaria for ET represent the product of careful clinical observation rather than targeted molecular modeling. Here we review their pharmacokinetics, metabolism, dosing, and adverse effect profiles and propose a treatment algorithm. We also discuss the concept of medically refractory tremor, as therapeutic trials should be limited unless invasive therapy is contraindicated or not desired by patients.",
"title": ""
},
{
"docid": "161c79eeb01624c497446cb2c51f3893",
"text": "In this article, results of a German nationwide survey (KFN schools survey 2007/2008) are presented. The controlled sample of 44,610 male and female ninth-graders was carried out in 2007 and 2008 by the Criminological Research Institute of Lower Saxony (KFN). According to a newly developed screening instrument (KFN-CSAS-II), which was presented to every third juvenile participant (N = 15,168), 3% of the male and 0.3% of the female students are diagnosed as dependent on video games. The data indicate a clear dividing line between extensive gaming and video game dependency (VGD) as a clinically relevant phenomenon. VGD is accompanied by increased levels of psychological and social stress in the form of lower school achievement, increased truancy, reduced sleep time, limited leisure activities, and increased thoughts of committing suicide. In addition, it becomes evident that personal risk factors are crucial for VGD. The findings indicate the necessity of additional research as well as the respective measures in the field of health care policies.",
"title": ""
},
{
"docid": "17cfb720c78e6d028f7578f2c5bdcf13",
"text": "Driver's drowsiness and fatigue have been major causes of the serious traffic accidents, which make this an area of great socioeconomic concern. This paper describes the design of ECG (Electrocardiogram) sensor with conductive fabric electrodes and PPG (Photoplethysmogram) sensor to obtain physiological signals for car driver's health condition monitoring. ECG and PPG signals are transmitted to base station connected to the server PC via personal area network for practical test. Intelligent health condition monitoring system is designed at the server to analyze the PPG and ECG signals. Our purpose for intelligent health condition monitoring system is managed to process HRV signals analysis derived from the physiological signals in time and frequency domain and to evaluate the driver's drowsiness status.",
"title": ""
},
{
"docid": "89e91d9c74421124c19ea573eef15b0c",
"text": "A cavity-backed triangular-complimentary-split-ring-slot (TCSRS) antenna based on substrate integrated waveguide (SIW) is proposed in this communication. Proposed antenna element is designed and implemented at 28 and 45 GHz for the fifth generation (5G) of wireless communications. The principle of the proposed antenna element is investigated first then the arrays with two and four elements are designed for high-gain operation. Antennas prototype along with their arrays are fabricated using standard printed circuit board (PCB) process at both frequencies-28 and 45 GHz. Measured result shows that the 16.67% impedance bandwidth at 28 GHz and 22.2% impedance bandwidth at 45 GHz is achieved, while maintaining the same substrate height at both frequencies. The measured peak gains of the 2 × 2 antenna array at 30 or 50 GHz are 13.5 or 14.4 dBi, respectively.",
"title": ""
}
] |
scidocsrr
|
c89b73da9165dc72761c751635e2c6ae
|
Defending Web Servers with Feints, Distraction and Obfuscation
|
[
{
"docid": "90bb9a4740e9fa028932b68a34717b43",
"text": "Recently, the increase of interconnectivity has led to a rising amount of IoT enabled devices in botnets. Such botnets are currently used for large scale DDoS attacks. To keep track with these malicious activities, Honeypots have proven to be a vital tool. We developed and set up a distributed and highly-scalable WAN Honeypot with an attached backend infrastructure for sophisticated processing of the gathered data. For the processed data to be understandable we designed a graphical frontend that displays all relevant information that has been obtained from the data. We group attacks originating in a short period of time in one source as sessions. This enriches the data and enables a more in-depth analysis. We produced common statistics like usernames, passwords, username/password combinations, password lengths, originating country and more. From the information gathered, we were able to identify common dictionaries used for brute-force login attacks and other more sophisticated statistics like login attempts per session and attack efficiency.",
"title": ""
},
{
"docid": "fb7807c7f28d0e768b6a8570d89b3b02",
"text": "This paper presents a summary of research findings for a new reacitve phishing investigative technique using Web bugs and honeytokens. Phishing has become a rampant problem in today 's society and has cost financial institutions millions of dollars per year. Today's reactive techniques against phishing usually involve methods that simply minimize the damage rather than attempting to actually track down a phisher. Our research objective is to track down a phisher to the IP address of the phisher's workstation rather than innocent machines used as intermediaries. By using Web bugs and honeytokens on the fake Web site forms the phisher presents, one can log accesses to the honeytokens by the phisher when the attacker views the results of the forms. Research results to date are presented in this paper",
"title": ""
},
{
"docid": "4c165c15a3c6f069f702a54d0dab093c",
"text": "We propose a simple method for improving the security of hashed passwords: the maintenance of additional ``honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the ``honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.",
"title": ""
}
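A minimal sketch of the honeyword mechanism described above: the login server stores a shuffled list of hashed sweetwords per account, while a separate honeychecker holds only the index of the genuine password and raises an alarm when a decoy is submitted. The hash function, decoy generator and alarm handling are simplified assumptions.

```python
import hashlib
import secrets

def h(pw: str) -> str:
    # placeholder; a slow, salted password hash would be used in practice
    return hashlib.sha256(pw.encode()).hexdigest()

class Honeychecker:
    """Hardened auxiliary server that only knows which sweetword index is the real password."""
    def __init__(self):
        self.true_index = {}
    def remember(self, user, index):
        self.true_index[user] = index
    def check(self, user, index):
        if index != self.true_index[user]:
            print(f"ALARM: honeyword submitted for account {user!r}")
            return False
        return True

class LoginServer:
    def __init__(self, checker, n_honeywords=19):
        self.checker, self.n_honeywords, self.sweetwords = checker, n_honeywords, {}
    def enroll(self, user, password):
        # index 0 is the real hash; decoys here are random tokens (real honeywords should look like passwords)
        words = [h(password)] + [h(secrets.token_urlsafe(8)) for _ in range(self.n_honeywords)]
        order = list(range(len(words)))
        secrets.SystemRandom().shuffle(order)
        self.sweetwords[user] = [words[i] for i in order]
        self.checker.remember(user, order.index(0))   # position where the real hash landed
    def login(self, user, password):
        hashed = h(password)
        if hashed not in self.sweetwords[user]:
            return False                               # ordinary failed login, no alarm
        return self.checker.check(user, self.sweetwords[user].index(hashed))

checker = Honeychecker()
server = LoginServer(checker)
server.enroll("alice", "correct horse battery staple")
print(server.login("alice", "correct horse battery staple"))   # True
print(server.login("alice", "wrong password"))                  # False, and no alarm
```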
] |
[
{
"docid": "5efebde0526dbb7015ecef066b76d1a9",
"text": "Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. However, most of the work in this direction has been confined to tasks such as teleoperation, simulation or explication of individual actions of a robot. In this paper, we will discuss how the capability to project intentions affect the task planning capabilities of a robot. Specifically, we will start with a discussion on how projection actions can be used to reveal information regarding the future intentions of the robot at the time of task execution. We will then pose a new planning paradigm - projection-aware planning - whereby a robot can trade off its plan cost with its ability to reveal its intentions using its projection actions. We will demonstrate each of these scenarios with the help of a joint human-robot activity using the HoloLens.",
"title": ""
},
{
"docid": "5da030b3e27cae63acd86c7fb9c4153d",
"text": "This work deals with the design and implementation prototype of a real time maximum power point tracker (MPPT) for photovoltaic panel (PV), aiming to improve energy transfer efficiency. This paper describes also the charging process of lead- acid batteries integrating the MPPT algorithm making an charging autonomous system that can be used to feed any autonomous application. The photovoltaic system exhibits a non-linear i-v characteristic and its maximum power point varies with solar insolation and temperature. To control the maximum transfer power from a PV panel the Perturbation and Observation (P&O) MPPT algorithm is executed by a simple microcontroller ATMEL ATTINY861V using the PV voltage and current information and controlling the duty cycle of a pulse width modulation (PWM) signal applied in to a DC/DC converter. The schematic and design of the single-ended primary inductance converter (SEPIC) is presented. This DC/DC converter is chosen because the input voltage can be higher or lower than the output voltage witch presents obvious design advantages. With the P&O MPPT algorithm implemented and executed by the microcontroller, the different charging stages of a lead-acid battery are described, showed and executed Finally, experimental results of the performance of the designed P&O MPPT algorithm are presented and compared with the results achieved with the direct connection of the PV panel to the battery.",
"title": ""
},
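A compact sketch of the perturb-and-observe update referred to above: the duty cycle of the DC/DC converter is nudged in the direction that increases measured PV power and reversed when power drops. The panel readout, PWM interface and step size are placeholder assumptions, not the ATTINY861V firmware.

```python
def perturb_and_observe(read_panel, set_duty, duty=0.5, step=0.01, iterations=500):
    """Classic P&O MPPT loop: perturb the duty cycle, observe the PV power,
    and keep moving in the direction that increases power."""
    v, i = read_panel()
    p_prev = v * i
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.05), 0.95)   # keep duty within converter limits
        set_duty(duty)
        v, i = read_panel()
        p = v * i
        if p < p_prev:              # power dropped -> last perturbation went the wrong way
            direction = -direction
        p_prev = p
    return duty

# toy usage: emulate a panel/converter whose output power peaks near duty = 0.62
state = {"duty": 0.5}
read_panel = lambda: (30.0, max(0.0, 8.0 - 40.0 * (state["duty"] - 0.62) ** 2))
set_duty = lambda d: state.update(duty=d)
print(round(perturb_and_observe(read_panel, set_duty), 2))      # settles near 0.62
```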
{
"docid": "0dd0f44e59c1ee1e04d1e675dfd0fd9c",
"text": "An important first step to successful global marketing is to understand the similarities and dissimilarities of values between cultures. This task is particularly daunting for companies trying to do business with China because of the scarcity of research-based information. This study uses updated values of Hofstede’s (1980) cultural model to compare the effectiveness of Pollay’s advertising appeals between the U.S. and China. Nine of the twenty hypotheses predicting effective appeals based on cultural dimensions were supported. An additional hypothesis was significant, but in the opposite direction as predicted. These findings suggest that it would be unwise to use Hofstede’s cultural dimensions as a sole predictor for effective advertising appeals. The Hofstede dimensions may lack the currency and fine grain necessary to effectively predict the success of the various advertising appeals. Further, the effectiveness of advertising appeals may be moderated by other factors, such as age, societal trends, political-legal environment and product usage.",
"title": ""
},
{
"docid": "273bb44ed02076008d5d2835baed9494",
"text": "Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.",
"title": ""
},
{
"docid": "79cffed53f36d87b89577e96a2b2e713",
"text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.",
"title": ""
},
{
"docid": "bf0d5ee15b213c47d9d4a6a95d19e14a",
"text": "We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so.We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.",
"title": ""
},
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37",
"text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.",
"title": ""
},
{
"docid": "da61794b9ffa1f6f4bc39cef9655bf77",
"text": "This manuscript analyzes the effects of design parameters, such as aspect ratio, doping concentration and bias, on the performance of a general CMOS Hall sensor, with insight on current-related sensitivity, power consumption, and bandwidth. The article focuses on rectangular-shaped Hall probes since this is the most general geometry leading to shape-independent results. The devices are analyzed by means of 3D-TCAD simulations embedding galvanomagnetic transport model, which takes into account the Lorentz force acting on carriers due to a magnetic field. Simulation results define a set of trade-offs and design rules that can be used by electronic designers to conceive their own Hall probes.",
"title": ""
},
{
"docid": "34bbc3054be98f2cc0edc25a00fe835d",
"text": "The increasing prevalence of co-processors such as the Intel Xeon Phi, has been reshaping the high performance computing (HPC) landscape. The Xeon Phi comes with a large number of power efficient CPU cores, but at the same time, it's a highly memory constraint environment leaving the task of memory management entirely up to application developers. To reduce programming complexity, we are focusing on application transparent, operating system (OS) level hierarchical memory management.\n In particular, we first show that state of the art page replacement policies, such as approximations of the least recently used (LRU) policy, are not good candidates for massive many-cores due to their inherent cost of remote translation lookaside buffer (TLB) invalidations, which are inevitable for collecting page usage statistics. The price of concurrent remote TLB invalidations grows rapidly with the number of CPU cores in many-core systems and outpace the benefits of the page replacement algorithm itself. Building upon our previous proposal, per-core Partially Separated Page Tables (PSPT), in this paper we propose Core-Map Count based Priority (CMCP) page replacement policy, which exploits the auxiliary knowledge of the number of mapping CPU cores of each page and prioritizes them accordingly. In turn, it can avoid TLB invalidations for page usage statistic purposes altogether. Additionally, we describe and provide an implementation of the experimental 64kB page support of the Intel Xeon Phi and reveal some intriguing insights regarding its performance. We evaluate our proposal on various applications and find that CMCP can outperform state of the art page replacement policies by up to 38%. We also show that the choice of appropriate page size depends primarily on the degree of memory constraint in the system.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "5fcda05ef200cd326ecb9c2412cf50b3",
"text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.",
"title": ""
},
{
"docid": "8c9c9ad5e3d19b56a096e519cc6e3053",
"text": "Cebocephaly and sirenomelia are uncommon birth defects. Their association is extremely rare; however, the presence of spina bifida with both conditions is not unexpected. We report on a female still-birth with cebocephaly, alobar holoprosencephaly, cleft palate, lumbar spina bifida, sirenomelia, a single umbilical artery, and a 46,XX karyotype, but without maternal diabetes mellitus. Our case adds to the examples of overlapping cephalic and caudal defects, possibly related to vulnerability of the midline developmental field or axial mesodermal dysplasia spectrum.",
"title": ""
},
{
"docid": "f945b645e492e2b5c6c2d2d4ea6c57ae",
"text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.",
"title": ""
},
{
"docid": "68689ad05be3bf004120141f0534fd2b",
"text": "A group of 156 first year medical students completed measures of emotional intelligence (EI) and physician empathy, and a scale assessing their feelings about a communications skills course component. Females scored significantly higher than males on EI. Exam performance in the autumn term on a course component (Health and Society) covering general issues in medicine was positively and significantly related to EI score but there was no association between EI and exam performance later in the year. High EI students reported more positive feelings about the communication skills exercise. Females scored higher than males on the Health and Society component in autumn, spring and summer exams. Structural equation modelling showed direct effects of gender and EI on autumn term exam performance, but no direct effects other than previous exam performance on spring and summer term performance. EI also partially mediated the effect of gender on autumn term exam performance. These findings provide limited evidence for a link between EI and academic performance for this student group. More extensive work on associations between EI, academic success and adjustment throughout medical training would clearly be of interest. 2005 Elsevier Ltd. All rights reserved. 0191-8869/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.04.014 q Ethical approval from the College of Medicine and Veterinary Medicine was sought and received for this investigation. Student information was gathered and used in accordance with the Data Protection Act. * Corresponding author. Tel.: +44 131 65",
"title": ""
},
{
"docid": "848d1bcf05598dbd654ca9835a076ee9",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.11.018 ⇑ Corresponding authors at: Salford Business School M5 4WT, UK. Tel.: +44 0161 2954124; fax: +44 0161 2 010 62794461; fax: +86 010 62786911 (D.-H. Zhou). E-mail addresses: W.Wang@salford.ac.uk (W. Wan (D.-H. Zhou). Remaining useful life (RUL) is the useful life left on an asset at a particular time of operation. Its estimation is central to condition based maintenance and prognostics and health management. RUL is typically random and unknown, and as such it must be estimated from available sources of information such as the information obtained in condition and health monitoring. The research on how to best estimate the RUL has gained popularity recently due to the rapid advances in condition and health monitoring techniques. However, due to its complicated relationship with observable health information, there is no such best approach which can be used universally to achieve the best estimate. As such this paper reviews the recent modeling developments for estimating the RUL. The review is centred on statistical data driven approaches which rely only on available past observed data and statistical models. The approaches are classified into two broad types of models, that is, models that rely on directly observed state information of the asset, and those do not. We systematically review the models and approaches reported in the literature and finally highlight future research challenges. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
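As one concrete instance of the statistical data-driven models surveyed above, the sketch below assumes a linear-drift Wiener degradation process, for which the remaining useful life is the inverse-Gaussian first-passage time to a failure threshold; the drift, diffusion and threshold values are purely illustrative.

```python
import numpy as np

def rul_pdf(t, x_now, threshold, drift, sigma):
    """Inverse-Gaussian first-passage-time density for X(s) = x_now + drift*s + sigma*B(s)
    reaching `threshold`, evaluated at remaining time t (elementwise over an array)."""
    margin = threshold - x_now                       # remaining degradation margin
    return margin / np.sqrt(2 * np.pi * sigma ** 2 * t ** 3) * np.exp(
        -(margin - drift * t) ** 2 / (2 * sigma ** 2 * t))

def mean_rul(x_now, threshold, drift):
    """Mean of the inverse-Gaussian first-passage time."""
    return (threshold - x_now) / drift

# illustrative condition-monitoring snapshot: degradation level 0.6, failure threshold 1.0
t = np.linspace(1e-3, 400.0, 4000)
pdf = rul_pdf(t, x_now=0.6, threshold=1.0, drift=0.005, sigma=0.02)
print(mean_rul(0.6, 1.0, 0.005))                 # mean RUL of 80.0 time units
print(round(float(t[np.argmax(pdf)]), 1))        # mode of the RUL density, earlier than the mean (right-skewed)
```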
{
"docid": "c02a1c89692d88671f4be454345f3fa3",
"text": "In this study, the resonant analysis and modeling of the microstrip-fed stepped-impedance (SI) slot antenna are presented by utilizing the transmission-line and lumped-element circuit topologies. This study analyzes the SI-slot antenna and systematically summarizes its frequency response characteristics, such as the resonance condition, spurious response, and equivalent circuit. Design formulas with respect to the impedance ratio of the SI slot antenna were analytically derived. The antenna designers can predict the resonant modes of the SI slot antenna without utilizing expensive EM-simulation software.",
"title": ""
},
{
"docid": "701cad5b373f3dbc0497c23057c55c8f",
"text": "In this paper, we focus on the problem of answer triggering addressed by Yang et al. (2015), which is a critical component for a real-world question answering system. We employ a hierarchical gated recurrent neural tensor (HGRNT) model to capture both the context information and the deep interactions between the candidate answers and the question. Our result on F value achieves 42.6%, which surpasses the baseline by over 10 %.",
"title": ""
},
{
"docid": "293e2cd2647740bb65849fed003eb4ac",
"text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.",
"title": ""
}
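An illustrative NumPy sketch of the LBP-TOP descriptor discussed above: a basic 8-neighbour LBP code is computed on slices of the XY, XT and YT planes of a video volume and the three normalized histograms are concatenated. The neighbourhood size and 256-bin histograms follow common defaults and are assumptions, not the authors' exact settings or extensions.

```python
import numpy as np

def lbp_codes(plane):
    """8-neighbour LBP for one 2D slice: compare each interior pixel with its ring of neighbours."""
    c = plane[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = plane[1 + dy:plane.shape[0] - 1 + dy, 1 + dx:plane.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_top(volume):
    """volume: (T, H, W) grayscale video block -> concatenated XY/XT/YT histograms (768 bins)."""
    planes = {
        "XY": [volume[t] for t in range(volume.shape[0])],        # spatial appearance
        "XT": [volume[:, y, :] for y in range(volume.shape[1])],  # horizontal motion
        "YT": [volume[:, :, x] for x in range(volume.shape[2])],  # vertical motion
    }
    hists = []
    for slices in planes.values():
        codes = np.concatenate([lbp_codes(s).ravel() for s in slices])
        hist = np.bincount(codes, minlength=256).astype(float)
        hists.append(hist / hist.sum())
    return np.concatenate(hists)

clip = np.random.default_rng(1).random((16, 32, 32))
print(lbp_top(clip).shape)   # (768,)
```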
] |
scidocsrr
|
2e903936eca92d9eb6a1aee2132efe43
|
Design, development and evaluation of a highly versatile robot platform for minimally invasive single-port surgery
|
[
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
}
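A small sketch of the robot-independent part of the mapping formalized above: the arc parameters of one piecewise-constant-curvature section (curvature, bending-plane angle, arc length) are mapped to a homogeneous transform, with the usual special case for the straight, zero-curvature limit; the numeric values in the usage example are arbitrary.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def section_transform(kappa, phi, ell):
    """Constant-curvature section: curvature kappa, bending-plane angle phi, arc length ell
    mapped to a 4x4 homogeneous transform from the section base to its tip."""
    theta = kappa * ell                                   # total bending angle
    if abs(kappa) < 1e-9:                                 # straight-line limit
        p = np.array([0.0, 0.0, ell])
    else:
        r = 1.0 / kappa                                   # bending radius
        p = rot_z(phi) @ np.array([r * (1 - np.cos(theta)), 0.0, r * np.sin(theta)])
    R = rot_z(phi) @ rot_y(theta) @ rot_z(-phi)           # tip orientation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# two stacked sections composed by matrix multiplication (piecewise constant curvature)
T = section_transform(2.0, 0.0, 0.5) @ section_transform(1.0, np.pi / 2, 0.4)
print(np.round(T[:3, 3], 3))   # tip position of the two-section chain
```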
] |
[
{
"docid": "b923f9c9669f5bf015fc7977e7987a37",
"text": "Artificial Intelligence (AI) has come of age. Two thousand and six marked the fiftieth anniversary of the Dartmouth Conference, where the term Artificial Intelligence was accepted as the official label for a new discipline that seemed to hold great promise in the pursuit of understanding the human mind. AI, as the nascent discipline came to be known in public and academic discourse, has accomplished a lot during this period, breaking new ground and providing deep insights into our minds, our technologies, and the relationship between them. But AI has also failed significantly, making false promises and often manifesting a kind of unbridled enthusiasm that is emblematic of Hollywood-style projects. This chapter seeks to capture both of these aspects: AI’s successes, accomplishments, and contributions to science, technology, and intellectual inquiry, on one hand, and its failures, fallacies, and shortcomings, on the other. The history of AI can, furthermore, be reviewed from different perspectives— humanistic, cognitive, sociological, and philosophical, among others. This review examines AI from two key perspectives—scientific and engineering. The former represents AI claims about the human mind and the nature of intelligence; the latter embodies the wide array of computer systems that are built by AI practitioners or by others who have, or claim to have, taken inspiration from ideas in AI in order to solve a practical problem in an area of application. Ideally, the scientific face should guide the engineering one and the engineering face would provide support and substance to its scientific counterpart. In reality, however, that relationship is not as straightforward as it should be, turning AI into a schizophrenic Janus. The way AI practitioners “talk” about these two faces complicates the situation even further, as we shall see. This review seeks to provide a balanced portrait of the two faces. Currently, we are witnessing a resurgence of interest in, and application of, AI in areas such as education, video gaming, financial forecasting, medical diagnosis, health and elderly care, data mining, self-aware computing, and the Semantic Web. The list is illustrative, but it clearly indicates the broad range of application domains that draw on AI techniques for ideas and solutions. As a general heuristic, whenever one encounters something qualified as “smart” or “intelligent”—as in CHAPTER 5",
"title": ""
},
{
"docid": "68651d6e68de08701f36907adda152ba",
"text": "This article presents a case involving a 16-year-old boy who came to the Tripler Army Medical Center Oral and Maxillofacial Surgery with a central giant cell granuloma (CGCG) on the anterior mandible. Initial management consisted of surgical curettage and intralesional injection of corticosteroids. Upon completion of steroid therapy, there was clinical and radiographic evidence of remission; however, radiographic evidence of lesion recurrence was seen at a six-month follow-up visit. The CGCG was retreated with curettage and five months of systemic injections of calcitonin, both of which failed. The lesion was most likely an aggressive form of CGCG that progressed despite conservative therapy, with destruction of hard and soft tissues, root resorption, tooth displacement, and paraesthesia in the anterior mandible. The authors present a treatment algorithm with comprehensive management involving surgical resection, reconstruction, orthodontics, and orthognathic surgery with prosthodontic considerations.",
"title": ""
},
{
"docid": "6b467ec8262144150b17cedb3d96edcb",
"text": "We describe a new method of measuring surface currents using an interferometric synthetic aperture radar. An airborne implementation has been tested over the San Francisco Bay near the time of maximum tidal flow, resulting in a map of the east-west component of the current. Only the line-of-sight component of velocity is measured by this technique. Where the signal-to-noise ratio was strongest, statistical fluctuations of less than 4 cm s−1 were observed for ocean patches of 60×60 m.",
"title": ""
},
{
"docid": "d6d9cb649294de96ea2bfe18753559df",
"text": "Since health care on foods is drawing people's attention recently, a system that can record everyday meals easily is being awaited. In this paper, we propose an automatic food image recognition system for recording people's eating habits. In the proposed system, we use the Multiple Kernel Learning (MKL) method to integrate several kinds of image features such as color, texture and SIFT adaptively. MKL enables to estimate optimal weights to combine image features for each category. In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 61.34% classification rate for 50 kinds of foods. To the best of our knowledge, this is the first report of a food image classification system which can be applied for practical use.",
"title": ""
},
{
"docid": "5837606de41a0ed39c093d8f65a9176c",
"text": "Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, \"Geminoid F\", a typical humanoid robot with less facial degrees of freedom, \"Robovie R2\", and a robot with a 3-axis rotatable neck and movable lips, \"Telenoid R2\"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.",
"title": ""
},
{
"docid": "8830eb9ac71b112c6061e64446e396ab",
"text": "BACKGROUND\nLabia minora reduction (labioplasty, labiaplasty) is the most common female genital aesthetic procedure. The majority of labia reductions are performed by trimming the labial edges. Many of these women present with (1) asymmetry; (2) scalloping of the labial edges with wide, occasionally painful scars; and (3) abrupt termination and distortion of the clitoral hood at its normal junctions with the clitoral frenula and the upper labium. Reconstruction can usually be performed with wedge excisions, labial YV advancement, and touch-up trimming. Reconstruction of a labial amputation, however, required the development of a new clitoral hood flap.\n\n\nMETHODS\nTwenty-four clitoral hood flaps were performed on 17 patients from June of 2006 through May of 2010. An island clitoral hood flap randomly based on the dartos fascia of the lower clitoral hood and medial labium majus is transposed to the ipsilateral labial defect to reconstruct a labium. Of the 10 patients with unilateral flaps, nine of the patients had previous bilateral labial reductions. Reconstruction of the opposite side in these nine women was performed using one or a combination of the following: wedge excisions, YV advancement flaps, or controlled touch-up trimming.\n\n\nRESULTS\nAll 24 flaps survived, with four minor complications. Five patients underwent revision of a total of seven flaps, but only two were for complications. As experience increased, revisions for aesthetic improvement became less common.\n\n\nCONCLUSION\nReconstruction of labia minora defects secondary to trimming labia reductions is very successful using a combination of clitoral hood flaps, wedge excisions, and YV advancements.",
"title": ""
},
{
"docid": "a9c07fb7a8ca7115bfc5591aa082e1ef",
"text": "In this paper we introduce a variant of Memory Networks (Weston et al., 2015b) that needs significantly less supervision to perform question and answering tasks. The original model requires that the sentences supporting the answer be explicitly indicated during training. In contrast, our approach only requires the answer to the question during training. We apply the model to the synthetic bAbI tasks, showing that our approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data. Furthermore, it decisively beats other weakly supervised approaches based on LSTMs. The approach is quite general and can potentially be applied to many other tasks that require capturing long-term dependencies.",
"title": ""
},
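The record above describes a weakly supervised Memory Network that no longer needs the supporting sentences to be labelled during training. The sketch below shows the core mechanism that makes this possible: a single soft-attention "hop" of a question vector over sentence memories, so the supporting-fact selection becomes differentiable. The embeddings and dimensions are toy values; in the real model they are learned end-to-end, typically over multiple hops.

```python
# Sketch: one memory "hop" in the spirit of end-to-end (weakly supervised) memory networks.
# The question attends softly over sentence memories instead of being told which sentence
# supports the answer. All embeddings here are random toy values.
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def memory_hop(question_vec, memory_in, memory_out):
    """
    question_vec: (d,) embedded question u
    memory_in:    (n_sentences, d) input memory embeddings m_i
    memory_out:   (n_sentences, d) output memory embeddings c_i
    Returns attention weights p and the read vector o = sum_i p_i * c_i.
    """
    scores = memory_in @ question_vec          # match each memory against the question
    p = softmax(scores)                        # soft, differentiable supporting-fact selection
    o = memory_out.T @ p                       # weighted read from the output memories
    return p, o

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, n_sentences, vocab = 20, 5, 50
    u = rng.normal(size=d)
    m = rng.normal(size=(n_sentences, d))
    c = rng.normal(size=(n_sentences, d))
    W = rng.normal(size=(vocab, d))            # answer projection
    p, o = memory_hop(u, m, c)
    answer_logits = W @ (o + u)                # predict the answer from the read plus the question
    print("attention over sentences:", np.round(p, 3))
    print("predicted answer id:", int(np.argmax(answer_logits)))
```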
{
"docid": "84f688155a92ed2196974d24b8e27134",
"text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. Reproduction in whole or in part is permitted for any purpose of the United States Government Requests for reprints should be sent to the",
"title": ""
},
{
"docid": "f0781a7d1e3ade0f020066ba4d451eb1",
"text": "Integration of multi-mode multi-band transceivers on a single chip will enable low-cost millimeter-wave systems for next-generation automotive radar sensors. The first dual-band millimeter-wave transceiver operating in the 22-29-GHz and 77-81-GHz short-range automotive radar bands is designed and implemented in 0.18-¿ m SiGe BiCMOS technology with fT/fmax of 200/180 GHz. The transceiver chip includes a dual-band low noise amplifier, a shared downconversion chain, dual-band pulse formers, power amplifiers, a dual-band frequency synthesizer and a high-speed highly-programmable baseband pulse generator. The transceiver achieves 35/31-dB receive gain, 4.5/8-dB double side-band noise figure, >60/30-dB cross-band isolation, -114/-100.4-dBc/Hz phase noise at 1-MHz offset, and 14.5/10.5-dBm transmit power in the 24/79-GHz bands. Radar functionality is also demonstrated using a loopback measurement. The 3.9 × 1.9-mm2 24/79-GHz transceiver chip consumes 0.51/0.615 W.",
"title": ""
},
{
"docid": "723f047858910cd7a73d18e8697bb242",
"text": "This paper presents a new technique for the measurement of integrated circuit (IC) conducted emissions. In particular, the spectrum of interfering current flowing through an IC port is detected by using a transverse electromagnetic mode (TEM) cell. A structure composed of a matched TEM cell with inside a transmission line is considered. The structure is excited by an interfering source connected to one end of the transmission line. The relationship between the current spectrum of the source and the spectrum of the RF power delivered to the TEM mode of the cell is derived. This relationship is evaluated for one specific structure and the experimental validation is shown. Results of conducted emission measurement performed by using such a technique are shown as well and compared with those derived by using the magnetic probe method.",
"title": ""
},
{
"docid": "e6d05a96665c2651c0b31f1bff67f04d",
"text": "Detecting the neural processes like axons and dendrites needs high quality SEM images. This paper proposes an approach using perceptual grouping via a graph cut and its combinations with Convolutional Neural Network (CNN) to achieve improved segmentation of SEM images. Experimental results demonstrate improved computational efficiency with linear running time.",
"title": ""
},
{
"docid": "cae906033391328e9875b0a05c9d3772",
"text": "Software tools for Business Process Reengineering (BPR) promise to reduce cost and improve quality of projects. This paper discusses the contribution of BPR tools in BPR projects and identi®es critical factors for their success. A model was built based on previous research on tool success. The analysis of empirical data shows that BPR tools are related to effectiveness rather than ef®ciency of the projects. Process visualization and process analysis features are key to BPR tool competence. Also success factors for BPR tools are different from those for CASE tools. # 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "3fa30df910c964bb2bf27a885aa59495",
"text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.",
"title": ""
},
{
"docid": "d5f905fb66ba81ecde0239a4cc3bfe3f",
"text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.",
"title": ""
},
{
"docid": "75e8548bfc5bf9cd213f1caacedb1593",
"text": "Time-resolved magnetic resonance angiography (TR MRA) is a promising less invasive technique for the diagnosis of intracranial vascular lesions and hypervascular tumors. Similar to 4-dimensional computed tomographic angiography obtaining high frame rate images, TR MRA utilizes acceleration techniques to acquire sequential arterial and venous phase images for identifying, localizing, and classifying vascular lesions. Because of the good agreement with digital subtraction angiography for grading brain arteriovenous malformations with the Spetzler-Martin classification and the good sensitivity for visualizing arteriovenous fistulas, studies have suggested that TR MRA could serve as a screening or routine follow-up tool for diagnosing intracranial vascular disorders. In this pictorial essay, we report on the use of TR MRA at 3.0 T to diagnose intracranial vascular lesions and hypervascular tumors, employing DSA as the reference technique.",
"title": ""
},
{
"docid": "515519cc7308477e1c38a74c4dd720f0",
"text": "The objective of cosmetic surgery is increased patient self-esteem and confidence. Most patients undergoing a procedure report these results post-operatively. The success of any procedure is measured in patient satisfaction. In order to optimize patient satisfaction, literature suggests careful pre-operative patient preparation including a discussion of the risks, benefits, limitations and expected results for each procedure undertaken. As a general rule, the patients that are motivated to surgery by a desire to align their outward appearance to their body-image tend to be the most satisfied. There are some psychiatric conditions that can prevent a patient from being satisfied without regard aesthetic success. The most common examples are minimal defect/Body Dysmorphic Disorder, the patient in crisis, the multiple revision patient, and loss of identity. This paper will familiarize the audience with these conditions, symptoms and related illnesses. Case examples are described and then explored in terms of the conditions presented. A discussion of the patient’s motivation for surgery, goals pertaining to specific attributes, as well as an evaluation of the patient’s understanding of the risks, benefits, and limitations of the procedure can help the physician determine if a patient is capable of being satisfied with a cosmetic plastic surgery procedure. Plastic surgeons can screen patients suffering from these conditions relatively easily, as psychiatry is an integral part of medical school education. If a psychiatric referral is required, then the psychiatrist needs to be aware of the nuances of each of these conditions.",
"title": ""
},
{
"docid": "91eecde9d0e3b67d7af0194782923ead",
"text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.",
"title": ""
},
{
"docid": "16e4db0d3bcf56a097652b2197ff0adb",
"text": "a r t i c l e i n f o Enterprise adoption of information technology (IT) innovations has been a topic of tremendous interest to both practitioners and researchers. The study of technological, managerial, strategic, and economic factors as well as adoption processes and contexts has led the field to become a rich tapestry of many theoretical and conceptual foundations. This paper provides a comprehensive multidisciplinary classification and analysis of the scholarly development of the enterprise-level IT innovation adoption literature by examining articles over the past three decades (1977–2008). We identify 472 articles and classify them by functional discipline, publication, research methodology, and IT type. The paper applies text analytic methods to this document repository to (1) identify salient adoption determinants and their relationships, (2) discover research trends and patterns across disciplines, and (3) suggest potential areas for future research in IT innovation adoption at the enterprise level. Adoption of information technology (IT) innovations has been a topic of significant interest to researchers and practitioners over the past three decades (e.g. [18,53,56]). Broadly, there are two complementary perspectives on IT innovation adoption: the first and more frequently examined perspective includes the adoption of IT innovations by the individual user. Often referred to as the bottom–up view, individual IT innovation adoption research has focused on user characteristics, behavioral motivation, and contextual elements. Enterprise IT innovation adoption on the other hand focuses on the firm and firm-level characteristics. 1 This perspective has gained particular interest due to enterprises' increasing dependence on IT as well as some highly publicized successes and failures over the past two decades. Consequently enterprise-level IT innovation adoption studies have focused on why, how, and under what conditions enterprises have succeeded or failed in adopting and implementing IT innovations. These issues have been examined for a wide range of different IT innovations, including enterprise information systems, electronic commerce, database management systems, network and telecommunications infrastructure, computer hardware, enterprise architecture components, and business productivity applications, among many others. As a result, previous studies have identified drivers and inhibitors, explored the influence of important technological, individual, organizational, strategic, economic, and managerial , and environmental factors, and examined key processes and stages associated with the adoption of IT innovations. Because IT touches upon virtually all aspects of an enterprise's value chain, researchers have drawn on theories, frameworks and models from a variety of complementary academic reference disciplines such as information systems, …",
"title": ""
},
{
"docid": "83fd45840bc26b95365688f959226625",
"text": "Big Data concern large-volume, growing data sets that are complex and have multiple autonomous sources. Earlier technologies were not able to handle storage and processing of huge data thus Big Data concept comes into existence. This is a tedious job for users to identify accurate data from huge unstructured data. So, there should be some mechanism which classify unstructured data into organized form which helps user to easily access required data. Classification techniques over big transactional database provide required data to the users from large datasets more simple way. There are two main classification techniques, supervised and unsupervised. In this paper we focused on to study of different supervised classification techniques. Further this paper shows application of each technique and their advantages and limitations.",
"title": ""
},
{
"docid": "85b95ad66c0492661455281177004b9e",
"text": "Although relatively small in size and power output, automotive accessory motors play a vital role in improving such critical vehicle characteristics as drivability, comfort, and, most importantly, fuel economy. This paper describes a design method and experimental verification of a novel technique for torque ripple reduction in stator claw-pole permanent-magnet (PM) machines, which are a promising technology prospect for automotive accessory motors.",
"title": ""
}
] |
scidocsrr
|
58809bd46bc8f4656fa7a1c4495936fc
|
Designing of ORBAC Model For Secure Domain Environments
|
[
{
"docid": "8f7428569e1d3036cdf4842d48b56c22",
"text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.",
"title": ""
}
] |
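The query and the positive passage above concern RBAC/ORBAC-style access control models. As a minimal illustration of the hierarchical-RBAC idea described in the NIST model (senior roles inheriting the permissions of junior roles), a small sketch follows; it is not the NIST standard's formal specification nor an ORBAC implementation with organisations, contexts, or constraints — those elements are left out for brevity.

```python
# Sketch: hierarchical RBAC in miniature - users get roles, roles get permissions,
# and senior roles inherit the permissions of their junior roles. This illustrates
# the general idea only, not the NIST standard or a full ORBAC model.
from collections import defaultdict

class RBAC:
    def __init__(self):
        self.user_roles = defaultdict(set)    # user -> {role}
        self.role_perms = defaultdict(set)    # role -> {(action, resource)}
        self.juniors = defaultdict(set)       # role -> {directly inherited junior roles}

    def add_inheritance(self, senior, junior):
        self.juniors[senior].add(junior)

    def assign(self, user, role):
        self.user_roles[user].add(role)

    def grant(self, role, action, resource):
        self.role_perms[role].add((action, resource))

    def _roles_closure(self, roles):
        """All roles reachable through the inheritance hierarchy."""
        seen, stack = set(), list(roles)
        while stack:
            r = stack.pop()
            if r not in seen:
                seen.add(r)
                stack.extend(self.juniors[r])
        return seen

    def check(self, user, action, resource):
        roles = self._roles_closure(self.user_roles[user])
        return any((action, resource) in self.role_perms[r] for r in roles)

if __name__ == "__main__":
    ac = RBAC()
    ac.grant("employee", "read", "intranet")
    ac.grant("admin", "write", "intranet")
    ac.add_inheritance("admin", "employee")        # admin inherits employee permissions
    ac.assign("alice", "admin")
    print(ac.check("alice", "read", "intranet"))   # True, via inheritance
    print(ac.check("alice", "delete", "intranet")) # False
```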
[
{
"docid": "6992762ad22f9e33db6ded9430e06848",
"text": "Solution M and C are strictly dominated and hence cannot receive positive probability in any Nash equilibrium. Given that only L and R receive positive probability, T cannot receive positive probability either. So, in any Nash equilibrium player 1 must play B with probability one. Given that, any probability distribution over L and R is a best response for player 2. In other words, the set of Nash equilibria is given by",
"title": ""
},
{
"docid": "0dd43aa274838165077dc766ecdf3d83",
"text": "Seeds play essential roles in plant life cycle and germination is a complex process which is associated with different phases of water imbibition. Upon imbibition, seeds begin utilization of storage substances coupled with metabolic activity and biosynthesis of new proteins. Regeneration of organelles and emergence of radicals lead to the establishment of seedlings. All these activities are regulated in coordinated manners. Translation is the requirement of germination of seeds via involvements of several proteins like beta-amylase, starch phosphorylase. Some important proteins involved in seed germination are discussed in this review. In the past decade, several proteomic studies regarding seed germination of various species such as rice, Arabidopsis have been conducted. We face A paucity of proteomic data with respect to woody plants e.g. Fagus, Pheonix etc. With particular reference to Cyclobalnopsis gilva, a woody plant having low seed germination rate, no proteomic studies have been conducted. The review aims to reveal the complex seed germination mechanisms from woody and herbaceous plants that will help in understanding different seed germination phases and the involved proteins in C. gilva.",
"title": ""
},
{
"docid": "4b930300b13c954ad8a158517ebb8109",
"text": "Under partial shading conditions, multiple peaks are observed in the power-voltage (P- V) characteristic curve of a photovoltaic (PV) array, and the conventional maximum power point tracking (MPPT) algorithms may fail to track the global maximum power point (GMPP). Therefore, this paper proposes a modified incremental conductance (Inc Cond) algorithm that is able to track the GMPP under partial shading conditions and load variation. A novel algorithm is introduced to modulate the duty cycle of the dc-dc converter in order to ensure fast MPPT process. Simulation and hardware implementation are carried out to evaluate the effectiveness of the proposed algorithm under partial shading and load variation. The results show that the proposed algorithm is able to track the GMPP accurately under different types of partial shading conditions, and the response during variation of load and solar irradiation are faster than the conventional Inc Cond algorithm. Hence, the effectiveness of the proposed algorithm under partial shading condition and load variation is validated in this paper.",
"title": ""
},
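The record above proposes a modified incremental-conductance (Inc Cond) MPPT algorithm that can reach the global peak under partial shading. The sketch below shows only the conventional Inc Cond duty-cycle update rule (comparing dI/dV with -I/V) driving a toy PV curve; the paper's global-peak search, adaptive duty-cycle modulation, and converter dynamics are not reproduced, and the duty-to-voltage mapping is a crude assumption.

```python
# Sketch: the conventional incremental-conductance MPPT update. At the maximum power
# point dP/dV = 0, i.e. dI/dV = -I/V; the duty cycle of the dc-dc converter is nudged
# toward that condition. The global-peak search and adaptive step size from the record
# are deliberately omitted.
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005, d_min=0.05, d_max=0.95):
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            pass                      # operating point unchanged -> stay put
        elif di > 0:
            duty -= step              # irradiance rose -> move toward higher voltage
        else:
            duty += step
    else:
        g = di / dv                   # incremental conductance
        if g == -i / v:
            pass                      # dP/dV = 0 -> already at the MPP
        elif g > -i / v:
            duty -= step              # left of the MPP -> raise the operating voltage
        else:
            duty += step              # right of the MPP -> lower the operating voltage
    return min(max(duty, d_min), d_max)

if __name__ == "__main__":
    # Toy PV source: I = Isc - k*V^2, so the power P = V*I peaks at an interior voltage.
    isc, k = 8.0, 0.02
    duty = 0.5
    v = 40.0 * (1.0 - duty)                     # crude buck-type mapping: higher duty -> lower PV voltage
    v_prev, i_prev = 25.0, isc - k * 25.0 * 25.0  # a previous sample at another operating point
    for _ in range(300):
        i = isc - k * v * v
        duty = inc_cond_step(v, i, v_prev, i_prev, duty)
        v_prev, i_prev = v, i
        v = 40.0 * (1.0 - duty)
    p = v * (isc - k * v * v)
    print("settled near V =", round(v, 2), "P =", round(p, 2))
```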
{
"docid": "65dfecb5e0f4f658a19cd87fb94ff0ae",
"text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improves performance. Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.",
"title": ""
},
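The preceding record discusses disciplined hyper-parameter selection: examining the loss for under/overfitting clues and balancing learning rate, momentum, and weight decay. As a small illustration of one idea associated with that line of work, the sketch below computes a triangular one-cycle-style learning-rate and momentum schedule, where momentum moves opposite to the learning rate. The specific bounds and cycle length are illustrative assumptions, not values recommended by the report.

```python
# Sketch: a triangular "1cycle"-style schedule - the learning rate warms up and then anneals,
# while momentum does the opposite. Bounds and cycle length are illustrative only.
def one_cycle(step, total_steps, lr_min=1e-4, lr_max=1e-2, mom_min=0.85, mom_max=0.95):
    half = total_steps / 2.0
    if step <= half:
        t = step / half                       # first half: ramp the learning rate up
    else:
        t = (total_steps - step) / half       # second half: ramp it back down
    lr = lr_min + t * (lr_max - lr_min)
    momentum = mom_max - t * (mom_max - mom_min)   # momentum moves opposite to the LR
    return lr, momentum

if __name__ == "__main__":
    total = 1000
    for step in (0, 250, 500, 750, 1000):
        lr, mom = one_cycle(step, total)
        print(f"step {step:4d}: lr={lr:.4f} momentum={mom:.3f}")
```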
{
"docid": "f835e60133415e3ec53c2c9490048172",
"text": "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.",
"title": ""
},
{
"docid": "24a4fb7f87d6ee75aa26aeb6b77f68bb",
"text": "Networked learning is much more ambitious than previous approaches of ICT-support in education. It is therefore more difficult to evaluate the effectiveness and efficiency of the networked learning activities. Evaluation of learners’ interactions in networked learning environments is a difficult, resource and expertise demanding task. Educators participating in online learning environments, have very little support by integrated tools to evaluate students’ activities and identify learners’ online browsing behavior and interactions. As a consequence, educators are in need for non-intrusive and automatic ways to get feedback from learners’ progress in order to better follow their learning process and appraise the online course effectiveness. They also need specialized tools for authoring, delivering, gathering and analysing data for evaluating the learning effectiveness of networked learning courses. Thus, the aim of this paper is to propose a new set of services for the evaluator and lecturer so that he/she can easily evaluate the learners’ progress and produce evaluation reports based on learners’ behaviour within a Learning Management System. These services allow the evaluator to easily track down the learners’ online behavior at specific milestones set up, gather feedback in an automatic way and present them in a comprehensive way. The innovation of the proposed set of services lies on the effort to adopt/adapt some of the web usage mining techniques combining them with the use of semantic description of networked learning tasks",
"title": ""
},
{
"docid": "2a1bee8632e983ca683cd5a9abc63343",
"text": "Phrase browsing techniques use phrases extracted automatically from a large information collection as a basis for browsing and accessing it. This paper describes a case study that uses an automatically constructed phrase hierarchy to facilitate browsing of an ordinary large Web site. Phrases are extracted from the full text using a novel combination of rudimentary syntactic processing and sequential grammar induction techniques. The interface is simple, robust and easy to use.\nTo convey a feeling for the quality of the phrases that are generated automatically, a thesaurus used by the organization responsible for the Web site is studied and its degree of overlap with the phrases in the hierarchy is analyzed. Our ultimate goal is to amalgamate hierarchical phrase browsing and hierarchical thesaurus browsing: the latter provides an authoritative domain vocabulary and the former augments coverage in areas the thesaurus does not reach.",
"title": ""
},
{
"docid": "b2895d35c6ffddfb9adc7c1d88cef793",
"text": "We develop algorithms for a stochastic appointment sequencing and scheduling problem with waiting time, idle time, and overtime costs. Scheduling surgeries in an operating room motivates the work. The problem is formulated as an integer stochastic program using sample average approximation. A heuristic solution approach based on Benders’ decomposition is developed and compared to exact methods and to previously proposed approaches. Extensive computational testing based on real data shows that the proposed methods produce good results compared to previous approaches. In addition we prove that the finite scenario sample average approximation problem is NP-complete.",
"title": ""
},
{
"docid": "b6daaad245ea5a6f8bf0c6280a80705c",
"text": "Human, Homo sapiens, female orgasm is not necessary for conception; hence it seems reasonable to hypothesize that orgasm is an adaptation for manipulating the outcome of sperm competition resulting from facultative polyandry. If heritable differences in male viability existed in the evolutionary past, selection could have favoured female adaptations (e.g. orgasm) that biased sperm competition in favour of males possessing heritable fitness indicators. Accumulating evidence suggests that low fluctuating asymmetry is a sexually selected male feature in a variety of species, including humans, possibly because it is a marker of genetic quality. Based on these notions, the proportion of a woman’s copulations associated with orgasm is predicted to be associated with her partner’s fluctuating asymmetry. A questionnaire study of 86 sexually active heterosexual couples supported this prediction. Women with partners possessing low fluctuating asymmetry and their partners reported significantly more copulatory female orgasms than were reported by women with partners possessing high fluctuating asymmetry and their partners, even with many potential confounding variables controlled. The findings are used to examine hypotheses for female orgasm other than selective sperm retention. i 1995 The Association for the Study of Animal Behaviour The human female orgasm has attracted great interest from many evolutionary behavioy-al scientists. Several hypotheses propose that female orgasm is an adaptation. First, human female orgasm has been claimed to create and maintain the pair bond between male and female by promoting female intimacy through sexual pleasure (e.g. Morris 1967; Eibl-Eibesfeldt 1989). Second, a number of evolutionists have suggested that human female orgasm functions in selective bonding with males by promoting affiliation primarily with males who are willing to invest time or material resources in the female (Alexander 1979; Alcock 1987) and/or males of genotypic quality (Smith 1984; Alcock 1987). Third, female orgasm has been said to motivate a female to pursue multiple males to prevent male infanticide of the female’s offspring and/or to gain material benefits from multiple mates (Hrdy 1981). Fourth, Morris (1967) proposed that human female orgasm functions to induce fatigue, sleep and a prone position, and thereby passively acts to retain sperm. Correspondence: R. Thornhill, Department of Biology, University of New Mexico, Albuquerque, NM 87131. 1091, U.S.A. (email: rthorn@unm.edu). Additional adaptational hypotheses suggest a more active process by which orgasm retains sperm. The ‘upsuck’ hypothesis proposes that orgasm actively retains sperm by sucking sperm into the uterus (Fox et al. 1970; see also Singer 1973). Smith (1984) modified this hypothesis into one based on sire choice; he argued that the evolved function of female orgasm is control over paternity of offspring by assisting the sperm of preferred sires and handicapping the sperm of non-preferred mates. Also, Baker & Bellis (1993; see also Baker et al. 1989) speculated that timing of the human female orgasm plays a role in sperm retention. Baker & Bellis (1993) showed that orgasm occurring near the time of male ejaculation results in greater sperm retention, as does orgasm up to 45 min. after ejaculation. Orgasm occurring more than a minute before male ejaculation appears not to enhance sperm retention. 
Baker & Bellis (1993) furthermore argued that orgasms occurring at one time may hinder retention of sperm from subsequent copulations up to 8 days later. In addition, a number of theorists have argued that human female orgasm has not been selected for because of its own functional significance and",
"title": ""
},
{
"docid": "e2de032eac6b4a8f6c816d6eb85b41ef",
"text": "Terrestrial habitats surrounding wetlands are critical to the management of natural resources. Although the protection of water resources from human activities such as agriculture, silviculture, and urban development is obvious, it is also apparent that terrestrial areas surrounding wetlands are core habitats for many semiaquatic species that depend on mesic ecotones to complete their life cycle. For purposes of conservation and management, it is important to define core habitats used by local breeding populations surrounding wetlands. Our objective was to provide an estimate of the biologically relevant size of core habitats surrounding wetlands for amphibians and reptiles. We summarize data from the literature on the use of terrestrial habitats by amphibians and reptiles associated with wetlands (19 frog and 13 salamander species representing 1363 individuals; 5 snake and 28 turtle species representing more than 2245 individuals). Core terrestrial habitat ranged from 159 to 290 m for amphibians and from 127 to 289 m for reptiles from the edge of the aquatic site. Data from these studies also indicated the importance of terrestrial habitats for feeding, overwintering, and nesting, and, thus, the biological interdependence between aquatic and terrestrial habitats that is essential for the persistence of populations. The minimum and maximum values for core habitats, depending on the level of protection needed, can be used to set biologically meaningful buffers for wetland and riparian habitats. These results indicate that large areas of terrestrial habitat surrounding wetlands are critical for maintaining biodiversity. Criterios Biológicos para Zonas de Amortiguamiento Alrededor de Hábitats de Humedales y Riparios para Anfibios y Reptiles Resumen: Los hábitats terrestres que rodean humedales son críticos para el manejo de recursos naturales. Aunque la protección de recursos acuáticos contra actividades humanas como agricultura, silvicultura y desarrollo urbano es obvia, también es aparente que las áreas terrestres que rodean a humedales son hábitat núcleo para muchas especies semiacuáticas que dependen de los ecotonos mésicos para completar sus ciclos de vida. Para propósitos de conservación y manejo, es importante definir los hábitats núcleo utilizados por las poblaciones reproductivas locales alrededor de humedales. Nuestro objetivo fue proporcionar una estimación del tamaño biológicamente relevante de los hábitats núcleo alrededor de humedales para anfibios y reptiles. Resumimos datos de la literatura sobre el uso de hábitats terrestres por anfibios y reptiles asociados con humedales (19 especies de ranas y 13 de salamandras, representando a 1363 individuos; 5 especies de serpientes y 28 de tortugas representando a más de 2245 individuos). Los hábitats núcleo terrestres variaron de 159 a 290 m para anfibios y de 127 a 289 para reptiles desde el borde del sitio acuático. Datos de estos estudios también indicaron la importancia de los hábitats terrestres para alimentación, hibernación y anidación, y, por lo tanto, que la interdependencia biológica entre hábitats acuáticos y terrestres es esencial para la persistencia de poblaciones. Dependiendo del nivel de protección requerida, se pueden utilizar los valores mínimos y máximos de hábitats núcleo para definir amortiguamientos biológicamente significativos para hábitats de humedales y riparios. 
Estos resultados indican que extensas áreas de hábitats terrestres que rodean humedales son críticas para el mantenimiento de la biodiversidad. Paper submitted November 24, 2002; revised manuscript accepted January 28, 2003. 1220 Buffer Zones for Wetlands and Riparian Habitats Semlitsch & Bodie Conservation Biology Volume 17, No. 5, October 2003 Introduction Terrestrial habitats surrounding wetlands are critical for the management of water and wildlife resources. It is well established that these terrestrial habitats are the sites of physical and chemical filtration processes that protect water resources (e.g., drinking water, fisheries) from siltation, chemical pollution, and increases in water temperature caused by human activities such as agriculture, silviculture, and urban development (e.g., Lowrance et al. 1984; Forsythe & Roelle 1990). It is generally acknowledged that terrestrial buffers or riparian strips 30–60 m wide will effectively protect water resources (e.g., Lee & Samuel 1976; Phillips 1989; Hartman & Scrivener 1990; Davies & Nelson 1994; Brosofske et al. 1997). However, terrestrial habitats surrounding wetlands are important to more than just the protection of water resources. They are also essential to the conservation and management of semiaquatic species. In the last few years, a number of studies have documented the use of terrestrial habitats adjacent to wetlands by a broad range of taxa, including mammals, birds, reptiles, and amphibians ( e.g., Rudolph & Dickson 1990; McComb et al. 1993; Darveau et al. 1995; Spackman & Hughes 1995; Hodges & Krementz 1996; Semlitsch 1998; Bodie 2001; Darveau et al. 2001 ). These studies have shown the close dependence of semiaquatic species, such as amphibians and reptiles, on terrestrial habitats for critical life-history functions. For example, amphibians, such as frogs and salamanders, breed and lay eggs in wetlands during short breeding seasons lasting only a few days or weeks and during the remainder of the year emigrate to terrestrial habitats to forage and overwinter (e.g., Madison 1997; Richter et al. 2001). Reptiles, such as turtles and snakes, often live and forage in aquatic habitats most of the year but emigrate to upland habitats to nest or overwinter (e.g., Gibbons et al. 1977; Semlitsch et al. 1988; Burke & Gibbons 1995; Bodie 2001). The biological importance of these habitats in maintaining biodiversity is obvious, yet criteria by which to define habitats and regulations to protect them are ambiguous or lacking (Semlitsch & Bodie 1998; Semlitsch & Jensen 2001). More importantly, a serious gap is created in biodiversity protection when regulations or ordinances, especially those of local or state governments, have been set based on criteria to protect water resources alone, without considering habitats critical to wildlife species. Further, the aquatic and terrestrial habitats needed to carry out life-history functions are essential and are defined here as “core habitats.” No summaries of habitat use by amphibians and reptiles exist to estimate the biologically relevant size of core habitats surrounding wetlands that are needed to protect biodiversity. For conservation and management, it is important to define and distinguish core habitats used by local breeding populations surrounding wetlands. 
For example, adult frogs, salamanders, and turtles are generally philopatric to individual wetlands and migrate annually between aquatic and terrestrial habitats to forage, reproduce, and overwinter ( e.g., Burke & Gibbons 1995; Semlitsch 1998). The amount of terrestrial habitats used during migrations to and from wetlands and for foraging defines the terrestrial core habitat of a population. This aggregation of breeding adults constitutes a local population centered on a single wetland or wetland complex. Local populations are connected by dispersal and are part of a larger metapopulation, which extends across the landscape (Pulliam 1988; Marsh & Trenham 2001). Annual migrations centered on a single wetland or wetland complex are biologically different than dispersal to new breeding sites. It is thought that dispersal among populations is achieved primarily by juveniles for amphibians ( e.g., Gill 1978; Breden 1987; Berven & Grudzien 1990) or by males for turtles (e.g., Morreale et al. 1984). Dispersal by juvenile amphibians tends to be unidirectional and longer in distance than the annual migratory movements of breeding adults ( e.g., Breden 1987; Seburn et al. 1997 ). Thus, habitats adjacent to wetlands can serve as stopping points and corridors for dispersal to other nearby wetlands. Ultimately, conservation and management plans must consider both local and landscape dynamics (Semlitsch 2000), but core habitats for local populations need to be defined before issues of connectivity at the metapopulation level are considered.",
"title": ""
},
{
"docid": "db26d71ec62388e5367eb0f2bb45ad40",
"text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there has been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithms for solving generic LPs, and it shows 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3] and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimizations in various fields, the proposed method has great potentials applicable to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performances on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our DISTRIBUTION A. Approved for public release: distribution unlimited. conditions not only cover various prior studies including maximum weight matching, mincost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work appeared in UAI (Uncertainty in Artificial Intelligence) and IEEE Conference in Big Data [1,3] and follow up work [2,4] under submission to major journals. 
Experiment: We first establish theoretical conditions when Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide-applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions that BP converges to the solution of LP. Our theoretical result unify almost all prior result about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithm for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic, thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state of the art algorithms for several combinatorial optimization problems. -------------------------------------------------------Study 1 -------------------------------------------------------We first introduce the background for our contributions. A joint distribution of � (binary) variables � = [��] ∈ {0,1}� is called graphical model (GM) if it factorizes as follows: for � = [��] ∈ {0,1}�, where ψψ� ,�� are some non-negative functions so called factors; � is a collection of subsets (each αα� is a subset of {1,⋯ ,�} with |��| ≥ 2; �� is the projection of � onto dimensions included in αα. Assignment �∗ is called maximum-a-posteriori (MAP) assignment if �∗maximizes the probability. The following figure depicts the graphical relation between factors � and variables �. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 1: Factor graph for the graphical model with factors αα1 = {1,3},�2 = {1,2,4},�3 = {2,3,4} Now we introduce the algorithm, (max-product) BP, for approximating MAP assignment in a graphical model. BP is an iterative procedure; at each iteration �, there are four messages between each variable �� and every associated αα ∈ ��, where ��: = {� ∈ �:� ∈ �}. Then, messages are updated as follows: Finally, given messages, BP marginal beliefs are computed as follows: Then, BP outputs the approximated MAP assignment ��� = [��] as Now, we are ready to introduce the main result of Study 1. Consider the following GM: for � = [��] ∈ {0,1}� and � = [��] ∈ ��, where the factor function ψψαα for αα ∈ � is defined as for some matrices ��,�� and vectors ��,��. Consider the Linear Programming (LP) corresponding the above GM: One can easily observe that the MAP assignments for GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution �∗ ∈ {0,1}�. The following theorem is our main result of Study 1 which provide sufficient conditions so that BP can indeed find the LP solution DISTRIBUTION A. Approved for public release: distribution unlimited. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. 
-------------------------------------------------------Study 2 -------------------------------------------------------Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world data-sets. Our evaluation shows that the framework shows higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might be not correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 2: Overview of our generic BP-based framework To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph � = (�,�) and edge weights � = [��] ∈ �|�|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): where δδ(�) is the set of edges incident to vertex � ∈ �. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issue of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics to the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th",
"title": ""
},
{
"docid": "b6634563103c752e961f6ff32759922b",
"text": "Among several biometric traits possible to be used for people identification, fingerprint is still the most used. Current automated fingerprint identification systems are based on ridge patterns and minutiae, classified as first and second level features, respectively. However, the development of new fingerprint sensors and the growing demand for more secure systems are leading to the use of additional discriminative fingerprint characteristics known as third level features, such as the sweat pores. Recent researches on fingerprint recognition have focused on fingerprint fragments, in which methods based only on first and second level features tend to obtain low recognition rates. This paper proposes a robust method developed for fingerprint recognition from fingerprint fragments based on ridges and sweat pores. We have extended a ridgebased fingerprint recognition method previously proposed in the literature, based on Hough Transform, by incorporating sweat pores information in the matching step. Experimental results showed that although the reduction of Equal Error Rate is modest, a significant improvement was observed when analyzing the FMR100 and FMR1000 metrics, which are more suitable for high security applications. For these two metrics, the proposed approach obtained a reduction superior to 10% of the rates, when compared to the original ridge-based approach. Keywords-biometrics; fingerprints; ridges; sweat pores",
"title": ""
},
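The record above matches fingerprint fragments with a Hough-transform-based ridge method extended with sweat pore information. The sketch below shows only the generic Hough-voting idea for aligning two 2-D point sets (e.g., pore coordinates) under translation: every point pair votes for a candidate shift, and the best-supported shift yields an alignment score. Rotation handling, ridge descriptors, and the paper's scoring details are omitted, and the coordinates are made up.

```python
# Sketch: Hough-style alignment voting between two 2-D point sets (e.g., sweat-pore
# coordinates from two fingerprint fragments). This is a simplified illustration of the
# voting idea only, not the paper's matching method.
from collections import Counter

def hough_translation_score(points_a, points_b, bin_size=4.0):
    votes = Counter()
    for (xa, ya) in points_a:
        for (xb, yb) in points_b:
            dx = round((xb - xa) / bin_size)   # quantise the candidate translation
            dy = round((yb - ya) / bin_size)
            votes[(dx, dy)] += 1
    if not votes:
        return 0, None
    (best_dx, best_dy), count = votes.most_common(1)[0]
    return count, (best_dx * bin_size, best_dy * bin_size)

if __name__ == "__main__":
    fragment = [(10, 12), (20, 35), (31, 8), (40, 40)]
    # The same pores shifted by (15, -5), plus one spurious detection.
    candidate = [(25, 7), (35, 30), (46, 3), (55, 35), (70, 70)]
    score, shift = hough_translation_score(fragment, candidate)
    print("matching pores:", score, "estimated shift:", shift)
```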
{
"docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c",
"text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.",
"title": ""
},
{
"docid": "c182be9222690ffe1c94729b2b79d8ed",
"text": "A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.",
"title": ""
},
{
"docid": "95df9ceddf114060d981415c0b1d6125",
"text": "This paper presents a comparative study of different neural network models for forecasting the weather of Vancouver, British Columbia, Canada. For developing the models, we used one year’s data comprising of daily maximum and minimum temperature, and wind-speed. We used Multi-Layered Perceptron (MLP) and an Elman Recurrent Neural Network (ERNN), which were trained using the one-step-secant and LevenbergMarquardt algorithms. To ensure the effectiveness of neurocomputing techniques, we also tested the different connectionist models using a different training and test data set. Our goal is to develop an accurate and reliable predictive model for weather analysis. Radial Basis Function Network (RBFN) exhibits a good universal approximation capability and high learning convergence rate of weights in the hidden and output layers. Experimental results obtained have shown RBFN produced the most accurate forecast model as compared to ERNN and MLP networks.",
"title": ""
},
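The record above compares MLP, ERNN and RBFN forecasters. Below is a minimal sketch of how an RBFN regressor can be assembled: k-means centres, a Gaussian hidden layer, and output weights fitted by linear least squares. The synthetic seasonal data and the chosen gamma/centre count are stand-in assumptions, not the Vancouver records or the paper's configuration.

```python
# Sketch: a minimal Radial Basis Function Network (RBFN) regressor - k-means centres,
# Gaussian hidden units, and a closed-form least-squares output layer. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

class RBFN:
    def __init__(self, n_centers=10, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _design(self, X):
        # Gaussian activation of every sample against every centre, plus a bias column.
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
        Phi = np.exp(-self.gamma * d2)
        return np.hstack([Phi, np.ones((X.shape[0], 1))])

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
        self.centers_ = km.cluster_centers_
        Phi = self._design(X)
        self.w_, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # closed-form output weights
        return self

    def predict(self, X):
        return self._design(X) @ self.w_

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    day = rng.uniform(0, 365, size=(300, 1))
    temp = 10 + 8 * np.sin(2 * np.pi * day[:, 0] / 365) + rng.normal(0, 1, 300)  # toy seasonal signal
    model = RBFN(n_centers=12, gamma=0.001).fit(day, temp)
    rmse = float(np.sqrt(np.mean((model.predict(day) - temp) ** 2)))
    print("fit RMSE:", round(rmse, 2))
```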
{
"docid": "14b15f15cb7dbb3c19a09323b4b67527",
"text": " Establishing mechanisms for sharing knowledge and technology among experts in different fields related to automated de-identification and reversible de-identification Providing innovative solutions for concealing, or removal of identifiers while preserving data utility and naturalness Investigating reversible de-identification and providing a thorough analysis of security risks of reversible de-identification Providing a detailed analysis of legal, ethical and social repercussion of reversible/non-reversible de-identification Promoting and facilitating the transfer of knowledge to all stakeholders (scientific community, end-users, SMEs) through workshops, conference special sessions, seminars and publications",
"title": ""
},
{
"docid": "141e3ad8619577140f02a1038981ecb2",
"text": "Sponges are sessile benthic filter-feeding animals, which harbor numerous microorganisms. The enormous diversity and abundance of sponge associated bacteria envisages sponges as hot spots of microbial diversity and dynamics. Many theories were proposed on the ecological implications and mechanism of sponge-microbial association, among these, the biosynthesis of sponge derived bioactive molecules by the symbiotic bacteria is now well-indicated. This phenomenon however, is not exhibited by all marine sponges. Based on the available reports, it has been well established that the sponge associated microbial assemblages keep on changing continuously in response to environmental pressure and/or acquisition of microbes from surrounding seawater or associated macroorganisms. In this review, we have discussed nutritional association of sponges with its symbionts, interaction of sponges with other eukaryotic organisms, dynamics of sponge microbiome and sponge-specific microbial symbionts, sponge-coral association etc.",
"title": ""
},
{
"docid": "69f3a41f7250377b2d99aa61249db37e",
"text": "In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts. It is more suitable to describe the domain knowledge than domain ontology for solving the uncertainty reasoning problems. First, the domain ontology with various events of news is predefined by domain experts. The document preprocessing mechanism will generate the meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. Then, the meaningful terms will be classified according to the events of the news by the term classifier. The fuzzy inference mechanism will generate the membership degrees for each fuzzy concept of the fuzzy ontology. Every fuzzy concept has a set of membership degrees associated with various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is also developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization.",
"title": ""
},
{
"docid": "063389c654f44f34418292818fc781e7",
"text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.",
"title": ""
},
{
"docid": "255f4d19d89e9ff7acb6cca900fe9ed6",
"text": "Objectives: Burnout syndrome (B.S.) affects millions of workers around the world, having a significant impact on their quality of life and the services they provide. It’s a psycho-social phenomenon, which can be handled through emotional management and psychological help. Emotional Intelligence (E.I) is very important to emotional management. This paper aims to investigate the relationship between Burnout syndrome and Emotional Intelligence in health professionals occupied in the sector of rehabilitation. Methods: The data were collected from a sample of 148 healthcare professionals, workers in the field of rehabilitation, who completed Maslach Burnout Inventory questionnaire, Trait Emotional Intelligence Que-Short Form questionnaire and a questionnaire collecting demographic data as well as personal and professional information. Simple linear regression and multiple regression analyses were conducted to analyze the data. Results: The results indicated that there is a positive relationship between Emotional Intelligence and Burnout syndrome as Emotional Intelligence acts protectively against Burnout syndrome and even reduces it. In particular, it was found that the higher the Emotional Intelligence, the lower the Burnout syndrome. Also, among all factors of Emotional Intelligence, “Emotionality”, seems to influence Burnout syndrome the most, as, the higher the rate of Emotionality, the lower the rate of Burnout. At the same time, evidence was found on the variability of Burnout syndrome through various models of explanation and correlation between Burnout syndrome and Emotional Intelligence and also, Burnout syndrome and Emotional Intelligence factors. Conclusion: Employers could focus on building emotional relationships with their employees, especially in the health care field. Furthermore, they could also promote some experimental seminars, sponsored by public or private institutions, in order to enhance Emotional Intelligence and to improve the workers’ quality of life and the quality of services they provide.",
"title": ""
}
] |
scidocsrr
|
25a9fcd7e030a3b979d6ddde84567765
|
A Cascade Model for Externalities in Sponsored Search
|
[
{
"docid": "c77fad43abe34ecb0a451a3b0b5d684e",
"text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A â cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks",
"title": ""
}
] |
[
{
"docid": "cfc15ed25912ac84f7c9afef93c4a0d6",
"text": "Lactate is an essential component of carbon metabolism in mammals. Recently, lactate was shown to signal through the G protein coupled receptor 81 (GPR81) and to thus modulate inflammatory processes. This study demonstrates that lactate inhibits pro-inflammatory signaling in a GPR81-independent fashion. While lipopolysaccharide (LPS) triggered expression of IL-6 and IL-12 p40, and CD40 in bone marrow-derived macrophages, lactate was able to abrogate these responses in a dose dependent manner in Gpr81-/- cells as well as in wild type cells. Macrophage activation was impaired when glycolysis was blocked by chemical inhibitors. Remarkably, lactate was found to inhibit LPS-induced glycolysis in wild type as well as in Gpr81-/- cells. In conclusion, our study suggests that lactate can induce GPR81-independent metabolic changes that modulate macrophage pro-inflammatory activation.",
"title": ""
},
{
"docid": "da694b74b3eaae46d15f589e1abef4b8",
"text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R 1⁄4 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R 1⁄4 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R 1⁄4 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0e1 t ha 1 y ), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha 1 y . Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ec5095df6250a8f6cdf088f730dfbd5e",
"text": "Canine atopic dermatitis (CAD) is a multifaceted disease associated with exposure to various offending agents such as environmental and food allergens. The diagnosis of this condition is difficult because none of the typical signs are pathognomonic. Sets of criteria have been proposed but are mainly used to include dogs in clinical studies. The goals of the present study were to characterize the clinical features and signs of a large population of dogs with CAD, to identify which of these characteristics could be different in food-induced atopic dermatitis (FIAD) and non-food-induced atopic dermatitis (NFIAD) and to develop criteria for the diagnosis of this condition. Using simulated annealing, selected criteria were tested on a large and geographically widespread population of pruritic dogs. The study first described the signalment, history and clinical features of a large population of CAD dogs, compared FIAD and NFIAD dogs and confirmed that both conditions are clinically indistinguishable. Correlations of numerous clinical features with the diagnosis of CAD are subsequently calculated, and two sets of criteria associated with sensitivity and specificity ranging from 80% to 85% and from 79% to 85%, respectively, are proposed. It is finally demonstrated that these new sets of criteria provide better sensitivity and specificity, when compared to Willemse and Prélaud criteria. These criteria can be applied to both FIAD and NFIAD dogs.",
"title": ""
},
{
"docid": "44fdf1c17ebda2d7b2967c84361a5d9a",
"text": "A high-efficiency power amplifier (PA) is important in a Megahertz wireless power transfer (WPT) system. It is attractive to apply the Class-E PA for its simple structure and high efficiency. However, the conventional design for Class-E PA can only ensure a high efficiency for a fixed load. It is necessary to develop a high-efficiency Class-E PA for a wide-range load in WPT systems. A novel design method for Class-E PA is proposed to achieve this objective in this paper. The PA achieves high efficiency, above 80%, for a load ranging from 10 to 100 Ω at 6.78 MHz in the experiment.",
"title": ""
},
{
"docid": "c0b40058d003cdaa80d54aa190e48bc2",
"text": "Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur free. In reality, motion blurs are pervasive in the real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing debluring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred by blurs to guide the sampling process in the particle filter based tracking. To evaluate our method, we have collected a large number of video sequences with significatcant motion blurs and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blurs, BLUT can robustly and reliably track severely blurred targets.",
"title": ""
},
{
"docid": "855b0ea3809c527e6a12f1a57b719a2c",
"text": "Pycnogenol (PYC) is a patented mix of bioflavonoids with potent anti-oxidant and anti-inflammatory properties. Previously, we showed that PYC administration to rats within hours after a controlled cortical impact (CCI) injury significantly protects against the loss of several synaptic proteins in the hippocampus. Here, we investigated the effects of PYC on CA3-CA1 synaptic function following CCI. Adult Sprague-Dawley rats received an ipsilateral CCI injury followed 15 min later by intravenous injection of saline vehicle or PYC (10 mg/kg). Hippocampal slices from the injured (ipsilateral) and uninjured (contralateral) hemispheres were prepared at seven and fourteen days post-CCI for electrophysiological analyses of CA3-CA1 synaptic function and induction of long-term depression (LTD). Basal synaptic strength was impaired in slices from the ipsilateral, relative to the contralateral, hemisphere at seven days post-CCI and susceptibility to LTD was enhanced in the ipsilateral hemisphere at both post-injury timepoints. No interhemispheric differences in basal synaptic strength or LTD induction were observed in rats treated with PYC. The results show that PYC preserves synaptic function after CCI and provides further rationale for investigating the use of PYC as a therapeutic in humans suffering from neurotrauma.",
"title": ""
},
{
"docid": "37f8103f3698463cb4ebce428a65e635",
"text": "Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of, among other things, using trolls (malicious accounts created for the purpose of manipulation) and bots (automated accounts) to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by U.S. Congress investigation. We collected a dataset with over 43 million elections-related posts shared on Twitter between September 16 and November 9, 2016 by about 5.7 million distinct users. This dataset includes accounts associated with the identified Russian trolls. We use label propagation to infer the users' ideology based on the news sources they shared, to classify a large number of them as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls significantly more often than liberals and produced 36 times more tweets. Additionally, most of the troll content originated in, and was shared by users from Southern states. Using state-of-the-art bot detection techniques, we estimated that about 4.9% and 6.2% of liberal and conservative users respectively were bots. Text analysis on the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users were exposed to Russian trolls in the period leading up to the 2016 U.S. Presidential election, it was mainly conservatives who helped amplify their message.",
"title": ""
},
{
"docid": "64efa01068b761d29c3b402d35c524db",
"text": "Inferring the interactions between different brain areas is an important step towards understanding brain activity. Most often, signals can only be measured from some specific brain areas (e.g., cortex in the case of scalp electroencephalograms). However, those signals may be affected by brain areas from which no measurements are available (e.g., deeper areas such as hippocampus). In this paper, the latter are described as hidden variables in a graphical model; such model quantifies the statistical structure in the neural recordings, conditioned on hidden variables, which are inferred in an automated fashion from the data. As an illustration, electroencephalograms (EEG) of Alzheimer’s disease patients are considered. It is shown that the number of hidden variables in AD EEG is not significantly different from healthy EEG. However, there are fewer interactions between the brain areas, conditioned on those hidden variables. Explanations for these observations are suggested.",
"title": ""
},
{
"docid": "919342b88482e827c3923d66e0c50cb7",
"text": "Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference. We calculate scores for sentences in document clusters based on their latent characteristics using a hierarchical topic model. Then, using these scores, we train a regression model based on the lexical and structural characteristics of the sentences, and use the model to score sentences of new documents to form a summary. Our system advances current state-of-the-art improving ROUGE scores by ∼7%. Generated summaries are less redundant and more coherent based upon manual quality evaluations.",
"title": ""
},
{
"docid": "6e77ad5dc5100163107bb568d26e7fac",
"text": "ReTest is a novel testing tool for Java applications with a graphical user interface (GUI), combining monkey testing and difference testing. Since this combination sidesteps the oracle problem, it enables the generation of GUI-based regression tests. ReTest makes use of evolutionary computing (EC), particularly a genetic algorithm (GA), to optimize these tests towards code coverage. While this is indeed a desirable goal in terms of software testing and potentially finds many bugs, it lacks one major ingredient: human behavior. Consequently, human testers often find the results less reasonable and difficult to interpret. This thesis proposes a new approach to improve the initial population of the GA with the aid of machine learning (ML), forming an ML-technique enhancedEC (MLEC) algorithm. In order to do so, existing tests are exploited to extract information on how human testers use the given GUI. The obtained data is then utilized to train an artificial neural network (ANN), which ranks the available GUI actions respectively their underlying GUI components at runtime—reducing the gap between manually created and automatically generated regression tests. Although the approach is implemented on top of ReTest, it can be easily used to guide any form of monkey testing. The results show that with only little training data, the ANN is able to reach an accuracy of 82 % and the resulting tests represent an improvement without reducing the overall code coverage and performance significantly.",
"title": ""
},
{
"docid": "6f46e0d6ea3fb99c6e6a1d5907995e87",
"text": "The study of financial markets has been addressed in many works during the last years. Different methods have been used in order to capture the non-linear behavior which is characteristic of these complex systems. The development of profitable strategies has been associated with the predictive character of the market movement, and special attention has been devoted to forecast the trends of financial markets. This work performs a predictive study of the principal index of the Brazilian stock market through artificial neural networks and the adaptive exponential smoothing method, respectively. The objective is to compare the forecasting performance of both methods on this market index, and in particular, to evaluate the accuracy of both methods to predict the sign of the market returns. Also the influence on the results of some parameters associated to both methods is studied. Our results show that both methods produce similar results regarding the prediction of the index returns. On the contrary, the neural networks outperform the adaptive exponential smoothing method in the forecasting of the market movement, with relative hit rates similar to the ones found in other developed markets. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6dc418a1d3316141e1f4da25698ff0f1",
"text": "Human identification at a distance has recently gained growing interest from computer vision researchers. Gait recognition aims essentially to address this problem by identifying people based on the way they walk. In this paper, a simple but efficient gait recognition algorithm using spatial-temporal silhouette analysis is proposed. For each image sequence, a background subtraction algorithm and a simple correspondence procedure are first used to segment and track the moving silhouettes of a walking figure. Then, eigenspace transformation based on Principal Component Analysis (PCA) is applied to time-varying distance signals derived from a sequence of silhouette images to reduce the dimensionality of the input feature space. Supervised pattern classification techniques are finally performed in the lower-dimensional eigenspace for recognition. This method implicitly captures the structural and transitional characteristics of gait. Extensive experimental results on outdoor image sequences demonstrate that the proposed algorithm has an encouraging recognition performance with relatively low computational cost.",
"title": ""
},
{
"docid": "e1690625108e0e377ce49b92fe22608d",
"text": "During times of disasters online users generate a significant amount of data, some of which are extremely valuable for relief efforts. In this paper, we study the nature of social-media content generated during two different natural disasters. We also train a model based on conditional random fields to extract valuable information from such content. We evaluate our techniques over our two datasets through a set of carefully designed experiments. We also test our methods over a non-disaster dataset to show that our extraction model is useful for extracting information from socially-generated content in general.",
"title": ""
},
{
"docid": "5bf2c4a187b35ad5c4e69aef5eb9ffea",
"text": "In the last decade, the research of the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. Thus, there exists growing demand to explore appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations in views of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones were developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The majority of usability problems found by usability testing and additional problems were discovered by the proposed checklist. It is expected that the usability checklist proposed in this study could be used quickly and efficiently by usability practitioners to evaluate the mobile phone UI in the middle of the mobile phone development process.",
"title": ""
},
{
"docid": "411d3048bd13f48f0c31259c41ff2903",
"text": "In computer vision, object detection is addressed as one of the most challenging problems as it is prone to localization and classification error. The current best-performing detectors are based on the technique of finding region proposals in order to localize objects. Despite having very good performance, these techniques are computationally expensive due to having large number of proposed regions. In this paper, we develop a high-confidence region-based object detection framework that boosts up the classification performance with less computational burden. In order to formulate our framework, we consider a deep network that activates the semantically meaningful regions in order to localize objects. These activated regions are used as input to a convolutional neural network (CNN) to extract deep features. With these features, we train a set of class-specific binary classifiers to predict the object labels. Our new region-based detection technique significantly reduces the computational complexity and improves the performance in object detection. We perform rigorous experiments on PASCAL, SUN, MIT-67 Indoor and MSRC datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in recognizing objects.",
"title": ""
},
{
"docid": "d0df1484ea03e91489e8916130392506",
"text": "Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8X super-resolve unaligned noisy and tiny (16X16) low-resolution face images. In contrast to encoder-decoder based autoencoders, our method uses decoder-encoder-decoder networks. We first employ a transformative discriminative decoder network to upsample and denoise simultaneously. Then we use a transformative encoder network to project the intermediate HR faces to aligned and noise-free LR faces. Finally, we use the second decoder to generate hallucinated HR images. Our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82dB PSNR.",
"title": ""
},
{
"docid": "ef0b0d4b547820aa78e216f75506d436",
"text": "The DeLone & McLean IS Success Model has become a standard for the specification and justification of the measurement of the dependent variable in information systems research. Attempts to apply and test the model have resulted in both confirmation and challenges. This paper reviews and analyzes over 150 articles which have referenced the model over the past eight years in order to examine what more we have learned about measuring IS success. It highlights recent contributions to IS success measurement and proposes a Reformulated IS Success Model which recognizes and incorporates those contributions. The report concludes with recommendations for future IS success measurement.",
"title": ""
},
{
"docid": "b27d6f04073e381d69e958f536586a11",
"text": "The idea of robotic companions capable of establishing meaningful relationships with humans remains far from being accomplished. To achieve this, robots must interact with people in natural ways, employing social mechanisms that people use while interacting with each other. One such mechanism is empathy, often seen as the basis of social cooperation and prosocial behaviour. We argue that artificial companions capable of behaving in an empathic manner, which involves the capacity to recognise another’s affect and respond appropriately, are more successful at establishing and maintaining a positive relationship with users. This paper presents a study where an autonomous robot with empathic capabilities acts as a social companion to two players in a chess game. The robot reacts to the moves played on the chessboard by displaying several facial expressions and verbal utterances, showing empathic behaviours towards one player and behaving neutrally towards the other. Quantitative and qualitative results of 31 participants indicate that users towards whom the robot behaved empathically perceived the robot as friendlier, which supports our hypothesis that empathy plays a key role in human-robot interaction.",
"title": ""
},
{
"docid": "656a83e0cc9631a282e81f4143042ae7",
"text": "Mobility Management in future cellular networks is becoming more challenging due to transition from macro only to multi-tier deployments. In this framework, the massive use of small cells rendered traditional Handover algorithms inappropriate to deal effectively with frequent Handovers, especially for fast users, in dense urban scenarios. Studies in this area focus mainly on the adjustment of the Hysteresis Margin and on the Time-to-Trigger (TTT) selection in line with the Self-Organized Networks (SON), concept. In that sense, the ability of each node to adapt its parameters to the actual scenario is considered vital for the smooth operation of the network. This work contributes to the latter by analyzing the dependence of the Handover performance on the inter-site distance between the macro cell and the small cell. Specifically, the most common KPIs (i.e. Handover, Ping Pong and Radio Link Failure probabilities) are analyzed for different inter-site distances and TTT values to provide solid basis for the TTT selection.",
"title": ""
},
{
"docid": "5eaea95d0e1febd8ee37c3b92f962ca0",
"text": "In the last few decades, there have been extensive studies on analysis and investigation of disc brake vibrations done by many researchers around the world on the possibility of eliminating brake vibration to improve vehicle users’ comfort. Despite these efforts, still no general solution exists. Therefore, it is one of the most important issues that require a detailed and in-depth study for investigation brake noise and vibration. Research on brake noise and vibration has been conducted using theoretical, experimental and numerical approaches. Experimental methods can provide real measured data and they are trustworthy. This paper aims to focus on experimental investigations and summarized recent studies on automotive disc brake noise and vibration for measuring instable frequencies and mode shapes for the system in vibration and to verify possible numerical solutions. Finally, the critical areas where further research directions are needed for reducing vibration of disc brake are suggested in the conclusions.",
"title": ""
}
] |
scidocsrr
|
e5cf02cef44b41fdfb2250c1a2359818
|
Agents That Explain Their Own Actions
|
[
{
"docid": "47fd7f9c704f684437e116061895e270",
"text": "To generate multimedia explanations, a system must be able to coordinate the use of different media in a single explanation. In this paper, we present an architecture that we have developed for COMET (COordinated Multimedia Explanation Testbed), a system that generates directions for equipment maintenance and repair, and we show how it addresses the coordination problem. In particular, we focus on the use of a single content planner that produces a common content description used by multiple media-specific generators, a media coordinator that makes a f'me-grained division of information between media, and bidirectional interaction between media-specific generators to allow influence across media.",
"title": ""
}
] |
[
{
"docid": "44c9de5fbaac78125277a9995890b43c",
"text": "In the real world, speech is usually distorted by both reverberation and background noise. In such conditions, speech intelligibility is degraded substantially, especially for hearing-impaired (HI) listeners. As a consequence, it is essential to enhance speech in the noisy and reverberant environment. Recently, deep neural networks have been introduced to learn a spectral mapping to enhance corrupted speech, and shown significant improvements in objective metrics and automatic speech recognition score. However, listening tests have not yet shown any speech intelligibility benefit. In this paper, we propose to enhance the noisy and reverberant speech by learning a mapping to reverberant target speech rather than anechoic target speech. A preliminary listening test was conducted, and the results show that the proposed algorithm is able to improve speech intelligibility of HI listeners in some conditions. Moreover, we develop a masking-based method for denoising and compare it with the spectral mapping method. Evaluation results show that the masking-based method outperforms the mapping-based method.",
"title": ""
},
{
"docid": "37ffd85867e68db6eadc244b2d20a403",
"text": "This paper presents a distributed algorithm to direct evacuees to exits through arbitrarily complex building layouts in emergency situations. The algorithm finds the safest paths for evacuees taking into account predictions of the relative movements of hazards, such as fires, and evacuees. The algorithm is demonstrated on a 64 node wireless sensor network test platform and in simulation. The results of simulations are shown to demonstrate the navigation paths found by the algorithm.",
"title": ""
},
{
"docid": "7c2c2dc6ba53f08f48a2b45672f0002d",
"text": "In past few decades, development in the area power electronics which increases the demand for high performance industrial applications has contributed to rapid developments in digital motor control. High efficiency, reduced noise, extended reliability at optimum cost is the challenge facing by many industries which uses the electric motors. Now days, the demand of electronic motor control is increases rapidly, not only in the area of automotive, computer peripherals, but also in industrial, electrical applications. All these applications need cost-effective solutions without compromising reliability. The purpose of this system is to design, simulate and implement the most feasible motor control for use in industrial and electrical applications. The proposed design describes the designing and development of a three phase induction motor drive with speed sensing. It is based on PIC18F4431 microcontroller which is dedicated for motor control applications. The designed drive is a low cost motor control drive used for medium power three phase induction motor and is targeted for industrial and electric appliances e.g. washing machines, compressors, air conditioning units, electric pumps and some simple industrial drives. The designed motor drive has another advantage that it would converts single phase into three phases supply where three phase motors are operated on a single phase supply. So it is the best option for those applications where three phase supply is not available. In such applications, three phase motors are preferred because they are efficient, economical and require less severe starting current. This paper deals with PWM technique used for speed control of three phase induction motor using single phase supply with PIC microcontroller. The implemented system converts a single phase AC input into high DC. The high DC is converted into three phase AC voltage by using inverter circuit. The desired AC voltage can be obtained by changing the switching time of MOSFET’s using PWM signals. These PWM signals are generated by the PIC microcontroller. Different PWM schemes are used for firing 1. Ph.D. student, Department of Electronics, Shankarrao Mohite Mahavidyalaya, Akluj, Solapur. bnjamadar@yahoo.co.in 2. Department of Electronics, Willingdon College, Sangli. srkumbhar@yahoo.co.in 3. Department of Electronics, D.B.F. Dayanand College of Arts and Science, Solapur. dssutrave@gmail.com of MOSFET’s and harmonic profiles are recorded through simulation. Out of them, best PWM firing scheme is used for the better efficiency. Speed variation of the induction motor is then recorded by changing duty cycle of the firing pulse of an inverter.",
"title": ""
},
{
"docid": "1d03d6f7cd7ff9490dec240a36bf5f65",
"text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.",
"title": ""
},
{
"docid": "2176518448c89ba977d849f71c86e6a6",
"text": "iii I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. _______________________________________ L. Peter Deutsch I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy. Abstract Object-oriented programming languages confer many benefits, including abstraction, which lets the programmer hide the details of an object's implementation from the object's clients. Unfortunately, crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction , while desirable from a design standpoint, may be impractical when it leads to inefficient programs. Aggressive compiler optimizations can reduce the overhead of abstraction. However, the long compilation times introduced by optimizing compilers delay the programming environment's responses to changes in the program. Furthermore, optimization also conflicts with source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals by performing optimizations lazily. Four new techniques work together to achieve high performance and high responsiveness: • Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. On average, programs run 1.5 times faster than the previous SELF system; compared to a commercial Smalltalk implementation, two medium-sized benchmarks run about three times faster. This level of performance is obtained with a compiler that is both simpler and faster than previous SELF compilers. • Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast non-optimizing compiler to generate initial code while automatically recompiling heavily used parts of the program with an optimizing compiler. On a previous-generation workstation like the SPARCstation-2, fewer than 200 pauses exceeded 200 ms during a 50-minute interaction, and 21 pauses exceeded one second. …",
"title": ""
},
{
"docid": "08a7621fe99afba5ec9a78c76192f43d",
"text": "Orthogonal Frequency Division Multiple Access (OFDMA) as well as other orthogonal multiple access techniques fail to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the users' signals separation at the receiver, which degrade the system spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose a NOMA scheme for uplink that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation has shown that the proposed scheme achieves bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness comparing to OFDMA.",
"title": ""
},
{
"docid": "3c3ae987e018322ca45b280c3d01eba8",
"text": "Boundary prediction in images as well as video has been a very active topic of research and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on established realworld video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has with minimalistic model assumptions derived a notion of “intuitive physics” that can be applied to novel scenes.",
"title": ""
},
{
"docid": "39340461bb4e7352ab6af3ce10460bd7",
"text": "This paper presents an 8 bit 1.8 V 500 MSPS digital- to analog converter using 0.18mum double poly five metal CMOS technology for frequency domain applications. The proposed DAC is composed of four unit cell matrix. A novel decoding logic is used to remove the inter block code transition (IBT) glitch. The proposed DAC shows less number of switching for a monotonic input and the product of number of switching and the current value associated with switching is also less than the segmented DAC. The SPICE simulated DNL and INL is 0.1373 LSB and 0.331 LSB respectively and are better than the segmented DAC. The proposed DAC also shows better SNDR and THD than the segmented DAC. The MATLAB simulated THD, SFDR and SNDR is more than 45 dB, 35 dB and 44 dB respectively at 500MS/s with a 10 MHz input sine wave with incoherent timing response between current switches.",
"title": ""
},
{
"docid": "fb173d15e079fcdf0cc222f558713f9c",
"text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.",
"title": ""
},
{
"docid": "7c7ede7474d8ed55930aca8f42102449",
"text": "In colloquial English, a “grower” is a man whose phallus expands significantly in length from the flaccid to the erect state; a “shower” is a man whose phallus does not demonstrate such expansion. We sought to investigate various factors that might predict a man being either a grower or a shower. A retrospective review of 274 patients who underwent penile duplex Doppler ultrasound (PDDU) for erectile dysfunction between 2011 and 2013 was performed. Penile length was measured, both in the flaccid state prior to intracavernosal injection (ICI) of a vasodilating agent (prostaglandin E1), and at peak erection during PDDU. The collected data included patient demographics, vascular, and anatomic parameters. The median change in penile length from flaccid to erect state was 4.0 cm (1.0–7.0), and was used as a cut-off value defining a grower (≥4.0 cm) or a shower (4.0 cm). A total of 73 men (26%) fit the definition of a grower (mean change in length of 5.3 cm [SD 0.5]) and 205 (74%) were showers (mean change in length of 3.1 cm [SD 0.9]). There were no differences between the groups with regards to race, smoking history, co-morbidities, erectile function, flaccid penile length, degree of penile rigidity after ICI, or PDDU findings. Growers were significantly younger (mean age 47.5 vs. 55.9 years, p < 0.001), single (37% vs. 23%, p = 0.031), received less vasodilator dose (10.3 mcg vs. 11.0 mcg, p = 0.038) and had a larger erect phallus (15.5 cm vs. 13.1 cm, p < 0.001). On multivariate analysis, only younger age was significantly predictive of being a grower (p < 0.001). These results suggest that younger age and single status could be predictors of a man being a grower, rather than a shower. Larger, multicultural and multinational studies are needed to confirm these results.",
"title": ""
},
{
"docid": "341023e29855c4ec07aad6f98a04fa87",
"text": "The space that can be seen from any vantage point is called an isovist and the set of such spaces forms a visual field whose extent defines different isovist fields based on different geometric properties. I suggest that our perceptions of moving within such fields might be related to these geometric properties. I begin with a formal representation of isovists and their fields, introducing simple geometric measures based on distance, area, perimeter, compactness, and convexity. I suggest a feasible computational scheme for measuring such fields, and illustrate how we can visualize their spatial and statistical properties by using maps and frequency distributions. I argue that the classification of fields based on these measures must be a prerequisite to the proper analysis of architectural and urban morphologies. To this end, I present two hypothetical examples based on simple geometries and three real examples based on London's Tate Gallery, Regent Street, and the centre of the English town of Wolverhampton. Although such morphologies can often be understood in terms of basic geometrical elements such as corridors, streets, rooms, and squares, isovist analysis suggests that visual fields have their own form which results from the interaction of geometry and movement. To illustrate how such analysis can be used, I outline methods of partitioning space, covering it with a small number of relatively independent isovists, and perceiving space by recording properties of the isovist fields associated with paths through that space. DOI:10.1068/b2725",
"title": ""
},
{
"docid": "254f2ef4608ea3c959e049073ad063f8",
"text": "Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.",
"title": ""
},
{
"docid": "dc33d2edcfb124af607bcb817589f6e9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "de21af25cede39d42c1064e626c621cb",
"text": "This study examined the polyphenol composition and antioxidant properties of methanolic extracts from amaranth, quinoa, buckwheat and wheat, and evaluated how these properties were affected following two types of processing: sprouting and baking. The total phenol content amongst the seed extracts were significantly higher in buckwheat (323.4 mgGAE/100 g) and decreased in the following order: buckwheat > quinoa > wheat > amaranth. Antioxidant capacity, measured by the radical 2,2-diphenyl-1-picylhydrazyl scavenging capacity and the ferric ion reducing antioxidant power assays was also highest for buckwheat seed extract (p < 0.01). Total phenol content and antioxidant activity was generally found to increase with sprouting, and a decrease in levels was observed following breadmaking. Analysis by liquid chromatography coupled with diode array detector revealed the presence of phenolic acids, catechins, flavanol, flavone and flavonol glycosides. Overall, quinoa and buckwheat seeds and sprouts represent potential rich sources of polyphenol compounds for enhancing the nutritive properties of foods such as gluten-free breads. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "713cec9de9becdc3e98004ea3514ffb7",
"text": "End-To-End speech recognition have become increasingly popular in mandarin speech recognition and achieved delightful performance. Mandarin is a tonal language which is different from English and requires special treatment for the acoustic modeling units. There have been several different kinds of modeling units for mandarin such as phoneme, syllable and Chinese character. In this work, we explore two major end-to-end models: connectionist temporal classification (CTC) model and attention based encoder-decoder model for mandarin speech recognition. We compare the performance of three different scaled modeling units: context dependent phoneme(CDP), syllable with tone and Chinese character. We find that all types of modeling units can achieve approximate character error rate (CER) in CTC model and the performance of Chinese character attention model is better than syllable attention model. Furthermore, we find that Chinese character is a reasonable unit for mandarin speech recognition. On DidiCallcenter task, Chinese character attention model achieves a CER of 5.68% and CTC model gets a CER of 7.29%, on the other DidiReading task, CER are 4.89% and 5.79%, respectively. Moreover, attention model achieves a better performance than CTC model on both datasets.",
"title": ""
},
{
"docid": "e75df6ff31c9840712cf1a4d7f6582cd",
"text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.",
"title": ""
},
{
"docid": "baa14d5bf6e457487d3630f34b3818d1",
"text": "This paper is focused on modelling and control of nonlinear dynamical system Ball & Plate in language Matlab/Simulink. PID/PSD controller is used in closed loop feedback control structure for the purpose of control. The verification of designed PID control algorithms, the same as nonlinear model of dynamical system, is performed with functional blocks of Simulink environment. This paper includes testing of designed PID control algorithms on real model Ball & Plate using multifunction I/O card MF 614, which communicates with PC by the functions of Real Time Toolbox. Visualization of the simulation results is realized by internet applications, which use Matlab Web Server.",
"title": ""
},
{
"docid": "ec14996dd3ce3701db628348dfeb63f2",
"text": "Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on a where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.",
"title": ""
},
{
"docid": "741078742178d09f911ef9633befeb9b",
"text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.",
"title": ""
}
] |
scidocsrr
|
a84bbe50db916ab41745fe8f6811d00c
|
Cloud Computing Features, Issues, and Challenges: A Big Picture
|
[
{
"docid": "30e229f91456c3d7eb108032b3470b41",
"text": "Software as a service (SaaS) is a rapidly growing model of software licensing. In contrast to traditional software where users buy a perpetual-use license, SaaS users buy a subscription from the publisher. Whereas traditional software publishers typically release new product features as part of new versions of software once in a few years, publishers using SaaS have an incentive to release new features as soon as they are completed. We show that this property of the SaaS licensing model leads to greater investment in product development under most conditions. This increased investment leads to higher software quality in equilibrium under SaaS compared to perpetual licensing. The software publisher earns greater profits under SaaS while social welfare is also higher",
"title": ""
},
{
"docid": "956799f28356850fda78a223a55169bf",
"text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for",
"title": ""
},
{
"docid": "5b34624e72b1ed936ddca775cca329ca",
"text": "The advent of Cloud computing as a newmodel of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a userdefined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithmwhich is called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm which is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). Both algorithms have a polynomial time complexity which make them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1996fa0ce1c4dcf45c160bc0c2ebe403",
"text": "In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. The prediction system, which builds on previous work in the area of gesture recognition, generates a prediction of human workspace occupancy by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy while interleaving planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motions and a simulated PR2 robot. Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account the predicted human workspace occupancy in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration.",
"title": ""
},
{
"docid": "02dab9e102d1b8f5e4f6ab66e04b3aad",
"text": "CHILD CARE PRACTICES ANTECEDING THREE PATTERNS OF PRESCHOOL BEHAVIOR. STUDIED SYSTEMATICALLY CHILD-REARING PRACTICES ASSOCIATED WITH COMPETENCE IN THE PRESCHOOL CHILD. 2015 American Psychological Association PDF documents require Adobe Acrobat Reader.Effects of Authoritative Parental Control on Child Behavior, Child. Child care practices anteceding three patterns of preschool behavior. Genetic.She is best known for her work on describing parental styles of child care and. Anteceding Three Patterns of Preschool Behavior, Genetic Psychology.Child care practices anteceding three patterns of preschool behavior.",
"title": ""
},
{
"docid": "49680e94843e070a5ed0179798f66f33",
"text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.",
"title": ""
},
{
"docid": "b3f5176f49b467413d172134b1734ed8",
"text": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.",
"title": ""
},
{
"docid": "7bd47ba6f139905b9cfa5af8cc66ddd3",
"text": "The impact of ICT (Information Communication Technology) hotel and hospitality industries has been widely recognized as one of the major changes in the last decade: new ways of communicating with guests, using ICT to improve services delivery to guest etc. The study tried to investigate the ICT Infrastructural Diffusion in hotels in Owerri, Imo State. In order to know the extent of spread, the study examine the current ICT infrastructures being used, the rate at which its being used and the factors affecting its adoption. The data collected was analyzed using SPSS Software and Regression model was estimated. The findings revealed that the rate at which hotels adopt and use ICT infrastructure is low and the most significant factor affecting the adoption and use of ICT is scope of activities the hotel is engaged in. It is therefore recommended that Government should increase the economic activities in the state so as to increase the adoption of ICT infrastructures.",
"title": ""
},
{
"docid": "d815d2bcc9436f9c9751ce18f87d2fe4",
"text": "Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10 °C.",
"title": ""
},
{
"docid": "c6283ee48fd5115d28e4ea0812150f25",
"text": "Stochastic regular bi-languages has been recently proposed to model the joint probability distributions appearing in some statistical approaches of Spoken Dialog Systems. To this end a deterministic and probabilistic finite state biautomaton was defined to model the distribution probabilities for the dialog model. In this work we propose and evaluate decision strategies over the defined probabilistic finite state bi-automaton to select the best system action at each step of the interaction. To this end the paper proposes some heuristic decision functions that consider both action probabilities learn from a corpus and number of known attributes at running time. We compare either heuristics based on a single next turn or based on entire paths over the automaton. Experimental evaluation was carried out to test the model and the strategies over the Let’s Go Bus Information system. The results obtained show good system performances. They also show that local decisions can lead to better system performances than best path-based decisions due to the unpredictability of the user behaviors.",
"title": ""
},
{
"docid": "5b07c3e3a8f91884f00cf728a2ef8772",
"text": "Human self-consciousness relies on the ability to distinguish between oneself and others. We sought to explore the neural correlates involved in self-other representations by investigating two critical processes: perspective taking and agency. Although recent research has shed light on the neural processes underlying these phenomena, little is known about how they overlap or interact at the neural level. In a two-factorial functional magnetic resonance imaging (fMRI) experiment, participants played a ball-tossing game with two virtual characters (avatars). During an active/agency (ACT) task, subjects threw a ball to one of the avatars by pressing a button. During a passive/nonagency (PAS) task, they indicated which of the other avatars threw the ball. Both tasks were performed from a first-person perspective (1PP), in which subjects interacted from their own perspective, and a third-person perspective (3PP), in which subjects interacted from the perspective of an avatar with another location in space. fMRI analyses revealed overlapping activity in medial prefrontal regions associated with representations of one's own perspective and actions (1PP and ACT), and overlapping activity in temporal-occipital, premotor, and inferior frontal, as well as posterior parietal regions associated with representation of others' perspectives and actions (3PP and PAS). These findings provide evidence for distinct neural substrates underlying representations of the self and others and provide support for the idea that the medial prefrontal cortex crucially contributes to a neural basis of the self. The lack of a statistically significant interaction suggests that perspective taking and agency represent independent constituents of self-consciousness.",
"title": ""
},
{
"docid": "589e44cebea65d07875842f2dec432d8",
"text": "Since the first demonstration of a production quality three-dimensional (3D) stacked-word-line NAND Flash memory [1], the 3b/cell 3D NAND Flash memory has seen areal density increases of more than 50% per year due to the aggressive development of 3D-wordline-stacking technology. This trend has been consistent for the last three consecutive years [2-4], however the storage market still requires higher density for diverse digital applications. A 4b/cell technology is one promising solution to increase bit density [5]. In this paper, we propose a 4b/cell 3D NAND Flash memory with a 12MB/s program throughput. The chip achieves a 5.63Gb/mm2 areal density, which is a 41.5% improvement as compared to a 3b/cell NAND Flash memory in the same 3D-NAND technology [4].",
"title": ""
},
{
"docid": "9b30a07edc14ed2d1132421d8f372cd2",
"text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.",
"title": ""
},
{
"docid": "b8e915263553222b24557c716ae73db4",
"text": "Computability logic (CL) is a systematic formal theory of computational tasks and resources, which, in a sense, can be seen as a semantics-based alternative to (the syntactically introduced) linear logic. With its expressive and flexible language, where formulas represent computational problems and “truth” is understood as algorithmic solvability, CL potentially offers a comprehensive logical basis for constructive applied theories and computing systems inherently requiring constructive and computationally meaningful underlying logics. Among the best known constructivistic logics is Heyting’s intuitionistic calculus INT, whose language can be seen as a special fragment of that of CL. The constructivistic philosophy of INT, however, just like the resource philosophy of linear logic, has never really found an intuitively convincing and mathematically strict semantical justification. CL has good claims to provide such a justification and hence a materialization of Kolmogorov’s known thesis “INT = logic of problems”. The present paper contains a soundness proof for INT with respect to the CL semantics. It is expected to constitute part 1 of a two-piece series on the intuitionistic fragment of CL, with part 2 containing an anticipated completeness proof.",
"title": ""
},
{
"docid": "7c36d7f2a9604470e0e97bd2425bbf0c",
"text": "Gamification, the use of game mechanics in non-gaming applications, has been applied to various systems to encourage desired user behaviors. In this paper, we examine patterns of user activity in an enterprise social network service after the removal of a points-based incentive system. Our results reveal that the removal of the incentive scheme did reduce overall participation via contribution within the SNS. We also describe the strategies by point leaders and observe that users geographically distant from headquarters tended to comment on profiles outside of their home country. Finally, we describe the implications of the removal of extrinsic rewards, such as points and badges, on social software systems, particularly those deployed within an enterprise.",
"title": ""
},
{
"docid": "69e4bb63a9041b3c95fba1a903bc0e5c",
"text": "Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representation by a suitable basis or, more generally, a frame, can be recovered from what was previously considered highly incomplete linear measurements by using efficient algorithms. This article shall serve as an introduction to and a survey about compressed sensing.",
"title": ""
},
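As a concrete complement to the survey abstract above, the sketch below recovers a sparse vector from a small number of random linear measurements with orthogonal matching pursuit, one standard reconstruction algorithm in compressed sensing (the survey also covers alternatives such as l1 minimization). The problem sizes and the Gaussian measurement matrix are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~ A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# toy demo: recover a 5-sparse signal of length 200 from 60 random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))   # should be close to zero
```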
{
"docid": "1b781833b9baaa393fc2d909be21c2c3",
"text": "BACKGROUND\nThe main aim of this study was to explore the relationships between personal self-concept and satisfaction with life, with the latter as the key indicator for personal adjustment. The study tests a structural model which encompasses four dimensions of self-concept: self-fulfillment, autonomy, honesty and emotions.\n\n\nMETHOD\nThe 801 participants in the study, all of whom were aged between 15 and 65 (M = 34.03, SD = 17.29), completed the Satisfaction with Life Scale (SWLS) and the Personal Self-Concept (APE) Questionnaire.\n\n\nRESULTS\nAlthough the four dimensions of personal self-concept differ in their weight, the results show that, taken together, they explain 46% of the differences observed in satisfaction with life. This implies a weight that is as significant as that observed for general self-esteem in previous research studies.\n\n\nCONCLUSIONS\nThis issue should be dealt with early on, during secondary education, in order to help prevent psychological distress or maladjustment.",
"title": ""
},
{
"docid": "8c58b608430e922284d8b4b8cd5cc51d",
"text": "At the end of the 19th century, researchers observed that biological substances have frequency- dependent electrical properties and that tissue behaves \"like a capacitor\" [1]. Consequently, in the first half of the 20th century, the permittivity of many types of cell suspensions and tissues was characterized up to frequencies of approximately 100 MHz. From the measurements, conclusions were drawn, in particular, about the electrical properties of the cell membranes, which are the main contributors to the tissue impedance at frequencies below 10 MHz [2]. In 1926, a study found a significant different permittivity for breast cancer tissue compared with healthy tissue at 20 kHz [3]. After World War II, new instrumentation enabled measurements up to 10 GHz, and a vast amount of data on the dielectric properties of different tissue types in the microwave range was published [4]-[6].",
"title": ""
},
{
"docid": "8d3e966453d230d956e7a112d93b2483",
"text": "ne of the fundamental challenges in theater ballistic missile defense (TBMD) is ascertaining which element in the threat complex is the lethal object. To classify the lethal object and other objects in the complex, it is necessary to model how these objects will appear to TBMD sensors. This article describes a generic parametric approach to building classifier models. The process is illustrated with an example of building a classifier for an infrared sensor. The formulas for probability of classification error are derived. The probability of error for a proposed classification scheme is vital to assessing its efficacy in system trade studies. (",
"title": ""
},
{
"docid": "0ce82ead0954b99d811b9f50eee76abc",
"text": "Convolutional Neural Networks (CNNs) dominate various computer vision tasks since Alex Krizhevsky showed that they can be trained effectively and reduced the top-5 error from 26.2 % to 15.3 % on the ImageNet large scale visual recognition challenge. Many aspects of CNNs are examined in various publications, but literature about the analysis and construction of neural network architectures is rare. This work is one step to close this gap. A comprehensive overview over existing techniques for CNN analysis and topology construction is provided. A novel way to visualize classification errors with confusion matrices was developed. Based on this method, hierarchical classifiers are described and evaluated. Additionally, some results are confirmed and quantified for CIFAR-100. For example, the positive impact of smaller batch sizes, averaging ensembles, data augmentation and test-time transformations on the accuracy. Other results, such as the positive impact of learned color transformation on the test accuracy could not be confirmed. A model which has only one million learned parameters for an input size of 32× 32× 3 and 100 classes and which beats the state of the art on the benchmark dataset Asirra, GTSRB, HASYv2 and STL-10 was developed.",
"title": ""
},
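The abstract above mentions visualizing classification errors with confusion matrices and deriving hierarchical classifiers from them. A minimal starting point is sketched below: it computes the confusion matrix and ranks the most confusable class pairs; the paper's actual visualization and grouping method is not reproduced here, and the function names are assumptions.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def most_confused_pairs(cm, top=10):
    """Rank off-diagonal class pairs by symmetric confusion mass, a simple
    starting point for grouping confusable classes into a hierarchy."""
    sym = cm + cm.T
    np.fill_diagonal(sym, 0)
    pairs = [(sym[i, j], i, j)
             for i in range(len(cm)) for j in range(i + 1, len(cm))]
    return sorted(pairs, reverse=True)[:top]
```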
{
"docid": "1a5b5073f66c9f6717eec49875094977",
"text": "This paper reviews the principal approaches to using Artificial Intelligence in Music Education. Music is a challenging domain for Artificial Intelligence in Education (AI-ED) because music is, in general, an open-ended domain demanding creativity and problem-seeking on the part of learners and teachers. In addition, Artificial Intelligence theories of music are far from complete, and music education typically emphasises factors other than the communication of ‘knowledge’ to students. This paper reviews critically some of the principal problems and possibilities in a variety of AI-ED approaches to music education. Approaches considered include: Intelligent Tutoring Systems for Music; Music Logo Systems; Cognitive Support Frameworks that employ models of creativity; highly interactive interfaces that employ AI theories; AI-based music tools; and systems to support negotiation and reflection. A wide variety of existing music AI-ED systems are used to illustrate the key issues, techniques and methods associated with these approaches to AI-ED in Music.",
"title": ""
},
{
"docid": "29df932ae4fad0b70b909c5c8f72dad3",
"text": "Recently, non-fixed camera-based free viewpoint sports video synthesis has become very popular. Camera calibration is an indispensable step in free viewpoint video synthesis, and the calibration has to be done frame by frame for a non-fixed camera. Thus, calibration speed is of great significance in real-time application. In this paper, a fast self-calibration method for a non-fixed camera is proposed to estimate the homography matrix between a camera image and a soccer field model. As far as we know, it is the first time to propose constructing feature vectors by analyzing crossing points of field lines in both camera image and field model. Therefore, different from previous methods that evaluate all the possible homography matrices and select the best one, our proposed method only evaluates a small number of homography matrices based on the matching result of the constructed feature vectors. Experimental results show that the proposed method is much faster than other methods with only a slight loss of calibration accuracy that is negligible in final synthesized videos.",
"title": ""
},
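The self-calibration described above ultimately produces a homography between the camera image and the field model. The sketch below shows only the generic estimation step, a direct linear transform (DLT) from four or more point correspondences; the paper's crossing-point matching, which decides which correspondences to try, is not shown, and the sample coordinates are hypothetical.

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src -> dst from >= 4 correspondences
    with the direct linear transform (smallest right singular vector of A)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# hypothetical correspondences: image crossing points -> field-model points (meters)
H = homography_dlt([(102, 40), (480, 44), (95, 300), (470, 310)],
                   [(0, 0), (52.5, 0), (0, 34), (52.5, 34)])
print(project(H, (102, 40)))   # approximately (0, 0)
```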
{
"docid": "300cd3e2d8e21f0c8dcf5ecba72cf283",
"text": "Accurate and reliable traffic forecasting for complicated transportation networks is of vital importance to modern transportation management. The complicated spatial dependencies of roadway links and the dynamic temporal patterns of traffic states make it particularly challenging. To address these challenges, we propose a new capsule network (CapsNet) to extract the spatial features of traffic networks and utilize a nested LSTM (NLSTM) structure to capture the hierarchical temporal dependencies in traffic sequence data. A framework for network-level traffic forecasting is also proposed by sequentially connecting CapsNet and NLSTM. On the basis of literature review, our study is the first to adopt CapsNet and NLSTM in the field of traffic forecasting. An experiment on a Beijing transportation network with 278 links shows that the proposed framework with the capability of capturing complicated spatiotemporal traffic patterns outperforms multiple state-of-the-art traffic forecasting baseline models. The superiority and feasibility of CapsNet and NLSTM are also demonstrated, respectively, by visualizing and quantitatively evaluating the experimental results.",
"title": ""
}
] |
scidocsrr
|
76859ee1ea8aab4ee6f19630c912e515
|
Clustering algorithms for bank customer segmentation
|
[
{
"docid": "802f77b4e2b8c8cdfb68f80fe31d7494",
"text": "In this article, we use three clustering methods (K-means, self-organizing map, and fuzzy K-means) to find properly graded stock market brokerage commission rates based on the 3-month long total trades of two different transaction modes (representative assisted and online trading system). Stock traders for both modes are classified in terms of the amount of the total trade as well as the amount of trade of each transaction mode, respectively. Results of our empirical analysis indicate that fuzzy K-means cluster analysis is the most robust approach for segmentation of customers of both transaction modes. We then propose a decision tree based rule to classify three groups of customers and suggest different brokerage commission rates of 0.4, 0.45, and 0.5% for representative assisted mode and 0.06, 0.1, and 0.18% for online trading system, respectively. q 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
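For the clustering step discussed in the passage above, a plain K-means baseline in scikit-learn looks like the sketch below. The synthetic trade-amount features are stand-ins (assumptions, not the study's data), and note that the study itself found fuzzy K-means, not plain K-means, to be the most robust choice for this segmentation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical customer features: total trade amount, assisted-mode amount,
# and online-mode amount, generated synthetically for illustration only.
X = np.column_stack([
    rng.lognormal(mean=10.0, sigma=1.0, size=500),
    rng.lognormal(mean=9.0, sigma=1.0, size=500),
    rng.lognormal(mean=8.5, sigma=1.2, size=500),
])

X_std = StandardScaler().fit_transform(X)            # put features on a common scale
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_std)
labels = km.labels_                                   # segment assignment per customer
centers = km.cluster_centers_                         # standardized segment centroids
```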
{
"docid": "6b7a1ec7fe105dc7e83291e39e8664ec",
"text": "The clustering problem is well known in the database literature for its numerous applications in problems such as customer segmentation, classification and trend analysis. Unfortunately, all known algorithms tend to break down in high dimensional spaces because of the inherent sparsity of the points. In such high dimensional spaces not all dimensions may be relevant to a given cluster. One way of handling this is to pick the closely correlated dimensions and find clusters in the corresponding subspace. Traditional feature selection algorithms attempt to achieve this. The weakness of this approach is that in typical high dimensional data mining applications different sets of points may cluster better for different subsets of dimensions. The number of dimensions in each such cluster-specific subspace may also vary. Hence, it may be impossible to find a single small subset of dimensions for all the clusters. We therefore discuss a generalization of the clustering problem, referred to as the projected clustering problem, in which the subsets of dimensions selected are specific to the clusters themselves. We develop an algorithmic framework for solving the projected clustering problem, and test its performance on synthetic data.",
"title": ""
}
] |
[
{
"docid": "71d744aefd254acfc24807d805fb066b",
"text": "Bitcoin provides only pseudo-anonymous transactions, which can be exploited to link payers and payees -- defeating the goal of anonymous payments. To thwart such attacks, several Bitcoin mixers have been proposed, with the objective of providing unlinkability between payers and payees. However, existing Bitcoin mixers can be regarded as either insecure or inefficient.\n We present Obscuro, a highly efficient and secure Bitcoin mixer that utilizes trusted execution environments (TEEs). With the TEE's confidentiality and integrity guarantees for code and data, our mixer design ensures the correct mixing operations and the protection of sensitive data (i.e., private keys and mixing logs), ruling out coin theft and address linking attacks by a malicious service provider. Yet, the TEE-based implementation does not prevent the manipulation of inputs (e.g., deposit submissions, blockchain feeds) to the mixer, hence Obscuro is designed to overcome such limitations: it (1) offers an indirect deposit mechanism to prevent a malicious service provider from rejecting benign user deposits; and (2) scrutinizes blockchain feeds to prevent deposits from being mixed more than once (thus degrading anonymity) while being eclipsed from the main blockchain branch. In addition, Obscuro provides several unique anonymity features (e.g., minimum mixing set size guarantee, resistant to dropping user deposits) that are not available in existing centralized and decentralized mixers.\n Our prototype of Obscuro is built using Intel SGX and we demonstrate its effectiveness in Bitcoin Testnet. Our implementation mixes 1000 inputs in just 6.49 seconds, which vastly outperforms all of the existing decentralized mixers.",
"title": ""
},
{
"docid": "800493bbef15369b28c6278111c4db6e",
"text": "Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N~^), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N^N'). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.",
"title": ""
},
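To make the two convergence rates in the abstract above concrete, the sketch below compares plain Monte Carlo against quasi-Monte Carlo with a Halton low-discrepancy sequence on a toy integrand. The integrand, dimension, and sample size are illustrative choices, not taken from the article.

```python
import numpy as np

def halton(n, dim):
    """First n points of the Halton low-discrepancy sequence in [0,1)^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
    pts = np.empty((n, dim))
    for d, base in enumerate(primes):
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:              # radical-inverse of i+1 in the given base
                f /= base
                r += f * (k % base)
                k //= base
            pts[i, d] = r
    return pts

# Integrate f(x) = prod(x_d) over the unit cube; the exact value is (1/2)^dim.
dim, n = 5, 4096
f = lambda x: np.prod(x, axis=1)
exact = 0.5 ** dim

rng = np.random.default_rng(0)
mc = np.mean(f(rng.random((n, dim))))    # plain Monte Carlo, error ~ O(N^{-1/2})
qmc = np.mean(f(halton(n, dim)))         # quasi-Monte Carlo, error ~ O((log N)^k / N)
print(abs(mc - exact), abs(qmc - exact))
```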
{
"docid": "aee2d68e1229b43469bd97d1b6519c89",
"text": "The calpain family of Ca(2+) -dependent cysteine proteases plays a vital role in many important biological processes which is closely related with a variety of pathological states. Activated calpains selectively cleave relevant substrates at specific cleavage sites, yielding multiple fragments that can have different functions from the intact substrate protein. Until now, our knowledge about the calpain functions and their substrate cleavage mechanisms are limited because the experimental determination and validation on calpain binding are usually laborious and expensive. In this work, we aim to develop a new computational approach (LabCaS) for accurate prediction of the calpain substrate cleavage sites from amino acid sequences. To overcome the imbalance of negative and positive samples in the machine-learning training which have been suffered by most of the former approaches when splitting sequences into short peptides, we designed a conditional random field algorithm that can label the potential cleavage sites directly from the entire sequences. By integrating the multiple amino acid features and those derived from sequences, LabCaS achieves an accurate recognition of the cleave sites for most calpain proteins. In a jackknife test on a set of 129 benchmark proteins, LabCaS generates an AUC score 0.862. The LabCaS program is freely available at: http://www.csbio.sjtu.edu.cn/bioinf/LabCaS. Proteins 2013. © 2012 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "f27275812c3ed966f2371fb8d5356375",
"text": "Tapered slot antennas are a class of traveling wave antenna having promising application due to their wide bandwidth and symmetric radiation patterns. This paper discusses the characteristics of a tapered slot antenna in antipodal configuration and printed with elliptically shaped radiators. Simulation and experimental studies show good radiation and impedance characteristics and its usefulness for pulsed application.",
"title": ""
},
{
"docid": "f1b8fde2d3faab672169921b97dc8422",
"text": "Pooling plays an important role in generating a discriminative video representation. In this paper, we propose a new semantic pooling approach for challenging event analysis tasks (e.g., event detection, recognition, and recounting) in long untrimmed Internet videos, especially when only a few shots/segments are relevant to the event of interest while many other shots are irrelevant or even misleading. The commonly adopted pooling strategies aggregate the shots indifferently in one way or another, resulting in a great loss of information. Instead, in this work we first define a novel notion of semantic saliency that assesses the relevance of each shot with the event of interest. We then prioritize the shots according to their saliency scores since shots that are semantically more salient are expected to contribute more to the final event analysis. Next, we propose a new isotonic regularizer that is able to exploit the constructed semantic ordering information. The resulting nearly-isotonic support vector machine classifier exhibits higher discriminative power in event analysis tasks. Computationally, we develop an efficient implementation using the proximal gradient algorithm, and we prove new and closed-form proximal steps. We conduct extensive experiments on three real-world video datasets and achieve promising improvements.",
"title": ""
},
{
"docid": "768a4839232a39f8c4fe15ca095217d1",
"text": "Advances in deep learning over the last decade have led to a flurry of research in the application of deep artificial neural networks to robotic systems, with at least thirty papers published on the subject between 2014 and the present. This review discusses the applications, benefits, and limitations of deep learning vis-\\`a-vis physical robotic systems, using contemporary research as exemplars. It is intended to communicate recent advances to the wider robotics community and inspire additional interest in and application of deep learning in robotics.",
"title": ""
},
{
"docid": "cba56fe59541f08bbb10674aaec81906",
"text": "Ubuntu dialogue corpus is the largest public available dialogue corpus to make it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we proposed a method which combines the general pre-trained word embedding vectors with those generated on the taskspecific training set to address this issue. We integrated character embedding into Chen et al’s Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement against original ESIM and the new model has achieved state-of-the-art results on both Ubuntu dialogue corpus and Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags.",
"title": ""
},
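One simple way to realize the idea above, combining general pre-trained vectors with vectors trained on the task-specific corpus so that out-of-vocabulary words still get usable representations, is to concatenate the two per word, as sketched below. Concatenation is an assumption here (the paper's exact combination scheme may differ), and its character-embedding component is not shown.

```python
import numpy as np

def combine_embeddings(vocab, pretrained, task_specific, dim_pre, dim_task):
    """Build one vector per word by concatenating a general pre-trained vector
    with a vector trained on the task-specific corpus; words missing from the
    pre-trained table (OOV for it) still receive a useful task-specific part."""
    table = np.zeros((len(vocab), dim_pre + dim_task), dtype=np.float32)
    for i, w in enumerate(vocab):
        if w in pretrained:
            table[i, :dim_pre] = pretrained[w]      # general semantics
        if w in task_specific:
            table[i, dim_pre:] = task_specific[w]   # domain-specific usage
    return table
```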
{
"docid": "496175f20823fa42c852060cf41f5095",
"text": "Currently, the use of virtual reality (VR) is being widely applied in different fields, especially in computer science, engineering, and medicine. Concretely, the engineering applications based on VR cover approximately one half of the total number of VR resources (considering the research works published up to last year, 2016). In this paper, the capabilities of different computational software for designing VR applications in engineering education are discussed. As a result, a general flowchart is proposed as a guide for designing VR resources in any application. It is worth highlighting that, rather than this study being based on the applications used in the engineering field, the obtained results can be easily extrapolated to other knowledge areas without any loss of generality. This way, this paper can serve as a guide for creating a VR application.",
"title": ""
},
{
"docid": "70ed5a9f324bfd601de3759ae0b94bd1",
"text": "BACKGROUND\nBiomarkers have many distinct purposes, and depending on their intended use, the validation process varies substantially.\n\n\nPURPOSE\nThe goal of this article is to provide an introduction to the topic of biomarkers, and then to discuss three specific types of biomarkers, namely, prognostic, predictive, and surrogate.\n\n\nRESULTS\nA principle challenge for biomarker validation from a statistical perspective is the issue of multiplicity. In general, the solution to this multiplicity challenge is well known to statisticians: pre-specification and replication. Critical requirements for prognostic marker validation include uniform treatment, complete follow-up, unbiased case selection, and complete ascertainment of the many possible confounders that exist in the context of an observational sample. In the case of predictive biomarker validation, observational data are clearly inadequate and randomized controlled trials are mandatory. Within the context of randomization, strategies for predictive marker validation can be grouped into two categories: retrospective versus prospective validation. The critical validation criteria for a surrogate endpoint is to ensure that if a trial uses a surrogate endpoint, the trial will result in the same inferences as if the trial had observed the true endpoint. The field of surrogate endpoint validation has now moved to the multi-trial or meta-analytic setting as the preferred method.\n\n\nCONCLUSIONS\nBiomarkers are a highly active research area. For all biomarker developmental and validation studies, the importance of fundamental statistical concepts remains the following: pre-specification of hypotheses, randomization, and replication. Further statistical methodology research in this area is clearly needed as we move forward.",
"title": ""
},
{
"docid": "be17532b93e28edb4f73462cfe17f96d",
"text": "OBJECTIVES\nThe purpose of this study was to conduct a review of randomized controlled trials (RCTs) to determine the treatment effectiveness of the combination of manual therapy (MT) with other physical therapy techniques.\n\n\nMETHODS\nSystematic searches of scientific literature were undertaken on PubMed and the Cochrane Library (2004-2014). The following terms were used: \"patellofemoral pain syndrome,\" \"physical therapy,\" \"manual therapy,\" and \"manipulation.\" RCTs that studied adults diagnosed with patellofemoral pain syndrome (PFPS) treated by MT and physical therapy approaches were included. The quality of the studies was assessed by the Jadad Scale.\n\n\nRESULTS\nFive RCTs with an acceptable methodological quality (Jadad ≥ 3) were selected. The studies indicated that MT combined with physical therapy has some effect on reducing pain and improving function in PFPS, especially when applied on the full kinetic chain and when strengthening hip and knee muscles.\n\n\nCONCLUSIONS\nThe different combinations of MT and physical therapy programs analyzed in this review suggest that giving more emphasis to proximal stabilization and full kinetic chain treatments in PFPS will help better alleviation of symptoms.",
"title": ""
},
{
"docid": "5b61b6d96b7a4af62bf30b535a18e14a",
"text": "schooling were as universally endorsed as homework. Educators, parents, and policymakers of all political and pedagogical stripes insisted that homework is good and more is better—a view that was promoted most visibly in A Nation at Risk (National Commission on Excellence in Education, 1983) and What Works (U.S. Department of Education, 1986).1 Indeed, never in the history of American education was there a stronger professional and public consensus in favor of homework (see Gill & Schlossman, 1996; Gill & Schlossman, 2000). Homework has been touted for academic and character-building purposes, and for promoting America’s international competitiveness (see, e.g., Cooper, 2001; Keith, 1986; Maeroff, 1992; Maeroff, 1989; The Economist, 1995). It has been viewed as a key symbol, method, and yardstick of serious commitment to educational re-",
"title": ""
},
{
"docid": "2462af24189262b0145a6559d4aa6b3d",
"text": "A 30-MHz voltage-mode buck converter using a delay-line-based pulse-width-modulation controller is proposed in this brief. Two voltage-to-delay cells are used to convert the voltage difference to delay-time difference. A charge pump is used to charge or discharge the loop filter, depending on whether the feedback voltage is larger or smaller than the reference voltage. A delay-line-based voltage-to-duty-cycle (V2D) controller is used to replace the classical ramp-comparator-based V2D controller to achieve wide duty cycle. A type-II compensator is implemented in this design with a capacitor and resistor in the loop filter. The prototype buck converter was fabricated using a 0.18-<inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{m}$ </tex-math></inline-formula> CMOS process. It occupies an active area of 0.834 mm<sup>2</sup> including the testing PADs. The tunable duty cycle ranges from 11.9%–86.3%, corresponding to 0.4 V–2.8 V output voltage with 3.3 V input. With a step of 400 mA in the load current, the settling time is around 3 <inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{s}$ </tex-math></inline-formula>. The peak efficiency is as high as 90.2% with 2.4 V output and the maximum load current is 800 mA.",
"title": ""
},
{
"docid": "ecdb4528849cf9605f14f8a69308a1a4",
"text": "In this paper we describe findings from two studies aimed at understanding how health monitoring technology affects the parent-child relationship, examining emotional response and barriers to using this type of technology. We present suggestions for the design of health monitoring technology intended to enhance self-care in children without creating parent-child conflict. Our recommendations integrate the study findings, developmental stage specific concerns, and prior HCI research aimed at children’s health. Author",
"title": ""
},
{
"docid": "2ae1dfeae3c6b8a1ca032198f2989aef",
"text": "This study enhances the existing literature on online trust by integrating the consumers’ product evaluations model and technology adoption model in e-commerce environments. In this study, we investigate how perceived value influences the perceptions of online trust among online buyers and their willingness to repurchase from the same website. This study proposes a research model that compares the relative importance of perceived value and online trust to perceived usefulness in influencing consumers’ repurchase intention. The proposed model is tested using data collected from online consumers of e-commerce. The findings show that although trust and ecommerce adoption components are critical in influencing repurchase intention, product evaluation factors are also important in determining repurchase intention. Perceived quality is influenced by the perceptions of competitive price and website reputation, which in turn influences perceived value; and perceived value, website reputation, and perceived risk influence online trust, which in turn influence repurchase intention. The findings also indicate that the effect of perceived usefulness on repurchase intention is not significant whereas perceived value and online trust are the major determinants of repurchase intention. Major theoretical contributions and practical implications are discussed.",
"title": ""
},
{
"docid": "8808c5f8ce726a9382facc63f9460e21",
"text": "With the booming of deep learning in the recent decade, deep neural network has achieved state-of-art performances on many machine learning tasks and has been applied to more and more research fields. Stock market prediction is an attractive research topic since the successful prediction on the market’s future movement leads to significant profit. In this thesis, we investigate to combine the conventional stock analysis techniques with the popular deep learning together and study the impact of deep neural network on stock market prediction. Traditional short term stock market predictions are usually based on the analysis of historical market data, such as stock prices, moving averages or daily returns. Whereas financial news also contains useful information on public companies and the market. In this thesis we apply the popular word embedding methods and deep neural networks to leverage financial news to predict stock price movements in the market. Experimental results have shown that our proposed methods are simple but very effective, which can significantly improve the stock prediction accuracy on a standard financial database over the baseline system using only the historical price information.",
"title": ""
},
{
"docid": "4c59e73611e04e830cbc2676a50ec8ca",
"text": "This paper proposes a model of neural network which can be used to combine Long Short Term Memory networks (LSTM) with Deep Neural Networks (DNN). Autocorrelation coefficient is added to model to improve the accuracy of prediction model. It can provide better than the other traditional precision of the model. And after considering the autocorrelation features, the neural network of LSTM and DNN has certain advantages in the accuracy of the large granularity data sets. Several experiments were held using real-world data to show effectivity of LSTM model and accuracy were improve with autocorrelation considered.",
"title": ""
},
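The passage above adds autocorrelation coefficients to the model's inputs. A minimal way to compute such features and append them to sliding windows is sketched below; the window length, lag count, and the exact way the features are fed to the LSTM/DNN are assumptions, since the abstract does not specify them.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation coefficients r_1 .. r_max_lag of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def windows_with_acf(series, window, max_lag):
    """Sliding windows augmented with their autocorrelation coefficients,
    a simple way to expose correlation structure to an LSTM/DNN model."""
    feats = []
    for start in range(len(series) - window):
        w = np.asarray(series[start:start + window], dtype=float)
        feats.append(np.concatenate([w, autocorrelation(w, max_lag)]))
    return np.array(feats)

# toy usage on a synthetic series: 24-step windows plus 5 autocorrelation lags
demo = windows_with_acf(np.sin(np.arange(200) / 5.0), window=24, max_lag=5)
print(demo.shape)   # (176, 29)
```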
{
"docid": "49e1dc71e71b45984009f4ee20740763",
"text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.",
"title": ""
},
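The key ideas named in the abstract above, function-level granularity, abstraction before hashing, and a length filter that limits signature comparisons, can be caricatured in a few lines, as below. The normalization here is deliberately much cruder than the tool's real abstraction (which also rewrites identifiers, types, and literals) and is only meant to illustrate the indexing and lookup structure.

```python
import hashlib
import re
from collections import defaultdict

def abstract_and_hash(func_body):
    """Rough stand-in for the abstraction step: strip whitespace and hash
    the function body, returning (length, digest) as the signature."""
    normalized = re.sub(r"\s+", "", func_body)
    return len(normalized), hashlib.md5(normalized.encode()).hexdigest()

def build_index(vulnerable_functions):
    """Index known-vulnerable functions by normalized length so that, at query
    time, only the same-length bucket needs to be compared (the length filter)."""
    index = defaultdict(set)
    for body in vulnerable_functions:
        length, digest = abstract_and_hash(body)
        index[length].add(digest)
    return index

def is_clone(func_body, index):
    length, digest = abstract_and_hash(func_body)
    return digest in index.get(length, set())
```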
{
"docid": "10571a65808fb1253b6ad7f3a43c2e69",
"text": "Many studies have shown evidence for syntactic priming during language production (e.g., Bock, 1986). It is often assumed that comprehension and production share similar mechanisms and that priming also occurs during comprehension (e.g., Pickering & Garrod, 2004). Research investigating priming during comprehension (e.g., Branigan, Pickering, & McLean, 2005; Scheepers & Crocker, 2004) has mainly focused on syntactic ambiguities that are very different from the meaning-equivalent structures used in production research. In two experiments, we investigated whether priming during comprehension occurs in ditransitive sentences similar to those used in production research. When the verb was repeated between prime and target, we observed a priming effect similar to that in production. However, we observed no evidence for priming when the verbs were different. Thus, priming during comprehension occurs for very similar structures as priming during production, but in contrast to production, the priming effect is completely lexically dependent.",
"title": ""
},
{
"docid": "241f5a88f53c929cc11ce0edce191704",
"text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.",
"title": ""
}
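The abstract above mentions a tree-based batching method and per-record proofs of integrity anchored to the blockchain. A common way to obtain both is a Merkle tree over each batch, sketched below; the Merkle construction and the sample record strings are assumptions for illustration, not details taken from the paper.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    """Merkle root over a batch of records; anchoring only this root on-chain
    lets any single record later be proven intact with a logarithmic-size path."""
    level = [sha256(r.encode()) for r in records]
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# hypothetical batch of personal health records collected by the mobile app
root = merkle_root(["hr:72bpm@t1", "steps:5301@t1", "bp:118/76@t1"])
print(root.hex())
```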
] |
scidocsrr
|
0deddd2d6da144c291b6e2b240f98067
|
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points
|
[
{
"docid": "4f9b66eb63cd23cd6364992759269a2c",
"text": "In this paper, we present the concept of diffusing models to perform image-to-image matching. Having two images to match, the main idea is to consider the objects boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes. We illustrate this concept by an analogy with Maxwell's demons. We show that this concept relates to more traditional ones, based on attraction, with an intermediate step being optical flow techniques. We use the concept of diffusing models to derive three different non-rigid matching algorithms, one using all the intensity levels in the static image, one using only contour points, and a last one operating on already segmented images. Finally, we present results with synthesized deformations and real medical images, with applications to heart motion tracking and three-dimensional inter-patients matching.",
"title": ""
},
{
"docid": "1dbb3a49f6c0904be9760f877b7270b7",
"text": "We propose a geographical visualization to support operators of coastal surveillance systems and decision making analysts to get insights in vessel movements. For a possibly unknown area, they want to know where significant maritime areas, like highways and anchoring zones, are located. We show these features as an overlay on a map. As source data we use AIS data: Many vessels are currently equipped with advanced GPS devices that frequently sample the state of the vessels and broadcast them. Our visualization is based on density fields that are derived from convolution of the dynamic vessel positions with a kernel. The density fields are shown as illuminated height maps. Combination of two fields, with a large and small kernel provides overview and detail. A large kernel provides an overview of area usage revealing vessel highways. Details of speed variations of individual vessels are shown with a small kernel, highlighting anchoring zones where multiple vessels stop. Besides for maritime applications we expect that this approach is useful for the visualization of moving object data in general.",
"title": ""
}
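The density fields described above come from convolving vessel positions with a kernel. The sketch below computes such a field on a regular grid by binning positions into a 2-D histogram and blurring it with a separable Gaussian; the grid size and kernel width are illustrative, and the paper's illuminated height-map rendering and its overview/detail kernel combination are not shown.

```python
import numpy as np

def density_field(xs, ys, extent, grid=512, sigma_cells=8.0):
    """Kernel density field of point positions on a regular grid: a large
    sigma yields an overview of 'highways', a small sigma highlights detail
    such as anchoring zones where many vessels stop."""
    xmin, xmax, ymin, ymax = extent
    hist, _, _ = np.histogram2d(xs, ys, bins=grid,
                                range=[[xmin, xmax], [ymin, ymax]])
    radius = int(3 * sigma_cells)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma_cells) ** 2)
    kernel /= kernel.sum()
    # separable Gaussian blur: convolve columns, then rows
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, hist)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, blurred)
    return blurred
```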
] |
[
{
"docid": "2578d87c9d30187566a46586acaa6d09",
"text": "Abstract Virtual reality (VR)-based therapy has emerged as a potentially useful means to treat post-traumatic stress disorder (PTSD), but randomized studies have been lacking for Service Members from Iraq or Afghanistan. This study documents a small, randomized, controlled trial of VR-graded exposure therapy (VR-GET) versus treatment as usual (TAU) for PTSD in Active Duty military personnel with combat-related PTSD. Success was gauged according to whether treatment resulted in a 30 percent or greater improvement in the PTSD symptom severity as assessed by the Clinician Administered PTSD Scale (CAPS) after 10 weeks of treatment. Seven of 10 participants improved by 30 percent or greater while in VR-GET, whereas only 1 of the 9 returning participants in TAU showed similar improvement. This is a clinically and statistically significant result (χ(2) = 6.74, p < 0.01, relative risk 3.2). Participants in VR-GET improved an average of 35 points on the CAPS, whereas those in TAU averaged a 9-point improvement (p < 0.05). The results are limited by small size, lack of blinding, a single therapist, and comparison to a relatively uncontrolled usual care condition, but did show VR-GET to be a safe and effective treatment for combat-related PTSD.",
"title": ""
},
{
"docid": "df331d60ab6560808e28e3813766b67b",
"text": "Analyzing large graphs provides valuable insights for social networking and web companies in content ranking and recommendations. While numerous graph processing systems have been developed and evaluated on available benchmark graphs of up to 6.6B edges, they often face significant difficulties in scaling to much larger graphs. Industry graphs can be two orders of magnitude larger hundreds of billions or up to one trillion edges. In addition to scalability challenges, real world applications often require much more complex graph processing workflows than previously evaluated. In this paper, we describe the usability, performance, and scalability improvements we made to Apache Giraph, an open-source graph processing system, in order to use it on Facebook-scale graphs of up to one trillion edges. We also describe several key extensions to the original Pregel model that make it possible to develop a broader range of production graph applications and workflows as well as improve code reuse. Finally, we report on real-world operations as well as performance characteristics of several large-scale production applications.",
"title": ""
},
{
"docid": "e235a9eb5df7c5cf1487ae03cc6bc4d3",
"text": "The objective of the proposed scheme is to extract the maximum power at different wind turbine speed. In order to achieve this, MPPT controller is implemented on the rectifier side for the extraction of maximum power. On the inverter side normal closed loop PWM control is carried out. MPPT controller is implemented using fuzzy logic control technique. The fuzzy controller's role here is to track the speed reference of the generator. By doing so and keeping the generator speed at an optimal reference value, maximum power can be attained. This procedure is repeated for various wind turbine speeds, When the wind speed increases the real power generated by the PMSG based WECS increases with the aid of MPPT controller.",
"title": ""
},
{
"docid": "1e3136f97585c985153b3ed43ac8db6c",
"text": "In this report, we organize and reflect on recent advances and challenges in the field of sports data visualization. The exponentially-growing body of visualization research based on sports data is a prime indication of the importance and timeliness of this report. Sports data visualization research encompasses the breadth of visualization tasks and goals: exploring the design of new visualization techniques; adapting existing visualizations to a novel domain; and conducting design studies and evaluations in close collaboration with experts, including practitioners, enthusiasts, and journalists. Frequently this research has impact beyond sports in both academia and in industry because it is i) grounded in realistic, highly heterogeneous data, ii) applied to real-world problems, and iii) designed in close collaboration with domain experts. In this report, we analyze current research contributions through the lens of three categories of sports data: box score data (data containing statistical summaries of a sport event such as a game), tracking data (data about in-game actions and trajectories), and meta-data (data about the sport and its participants but not necessarily a given game). We conclude this report with a high-level discussion of sports visualization research informed by our analysis—identifying critical research gaps and valuable opportunities for the visualization community. More information is available at the STAR’s website: https://sportsdataviz.github.io/.",
"title": ""
},
{
"docid": "2913460e0fe1f17a0fc291de8886e2f7",
"text": "Residual Networks (ResNets) have become stateof-the-art models in deep learning and several theoretical studies have been devoted to understanding why ResNet works so well. One attractive viewpoint on ResNet is that it is optimizing the risk in a functional space by combining an ensemble of effective features. In this paper, we adopt this viewpoint to construct a new gradient boosting method, which is known to be very powerful in data analysis. To do so, we formalize the gradient boosting perspective of ResNet mathematically using the notion of functional gradients and propose a new method called ResFGB for classification tasks by leveraging ResNet perception. Two types of generalization guarantees are provided from the optimization perspective: one is the margin bound and the other is the expected risk bound by the sample-splitting technique. Experimental results show superior performance of the proposed method over state-of-the-art methods such as LightGBM.",
"title": ""
},
{
"docid": "ce848a090d33763e4612aa04437b7ebd",
"text": "Loving-kindness meditation is a practice designed to enhance feelings of kindness and compassion for self and others. Loving-kindness meditation involves repetition of phrases of positive intention for self and others. We undertook an open pilot trial of loving-kindness meditation for veterans with posttraumatic stress disorder (PTSD). Measures of PTSD, depression, self-compassion, and mindfulness were obtained at baseline, after a 12-week loving-kindness meditation course, and 3 months later. Effect sizes were calculated from baseline to each follow-up point, and self-compassion was assessed as a mediator. Attendance was high; 74% attended 9-12 classes. Self-compassion increased with large effect sizes and mindfulness increased with medium to large effect sizes. A large effect size was found for PTSD symptoms at 3-month follow-up (d = -0.89), and a medium effect size was found for depression at 3-month follow-up (d = -0.49). There was evidence of mediation of reductions in PTSD symptoms and depression by enhanced self-compassion. Overall, loving-kindness meditation appeared safe and acceptable and was associated with reduced symptoms of PTSD and depression. Additional study of loving-kindness meditation for PTSD is warranted to determine whether the changes seen are due to the loving-kindness meditation intervention versus other influences, including concurrent receipt of other treatments.",
"title": ""
},
{
"docid": "080f76412f283fb236c28678bf9dada8",
"text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot’s position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.",
"title": ""
},
{
"docid": "64723e2bb073d0ba4412a9affef16107",
"text": "The debate on the entrepreneurial university has raised questions about what motivates academics to engage with industry. This paper provides evidence, based on survey data for a comprehensive sample of UK investigators in the physical and engineering sciences. Our results suggest that most academics engage with industry to further their research rather than to commercialize their knowledge. However, there are differences in terms of the channels of engagement. While patenting and spin-off company formation is motivated exclusively by commercialization, joint research, contract research and consulting are strongly informed by research-related motives. We conclude that policy should refrain from focusing on monetary incentives for industry engagement and consider a broader range of incentives for promoting interaction between academia and industry.",
"title": ""
},
{
"docid": "22cdfb6170fab44905a8f79b282a1313",
"text": "CONTEXT\nInteprofessional collaboration (IPC) between biomedically trained doctors (BMD) and traditional, complementary and alternative medicine practitioners (TCAMP) is an essential element in the development of successful integrative healthcare (IHC) services. This systematic review aims to identify organizational strategies that would facilitate this process.\n\n\nMETHODS\nWe searched 4 international databases for qualitative studies on the theme of BMD-TCAMP IPC, supplemented with a purposive search of 31 health services and TCAM journals. Methodological quality of included studies was assessed using published checklist. Results of each included study were synthesized using a framework approach, with reference to the Structuration Model of Collaboration.\n\n\nFINDINGS\nThirty-seven studies of acceptable quality were included. The main driver for developing integrative healthcare was the demand for holistic care from patients. Integration can best be led by those trained in both paradigms. Bridge-building activities, positive promotion of partnership and co-location of practices are also beneficial for creating bonding between team members. In order to empower the participation of TCAMP, the perceived power differentials need to be reduced. Also, resources should be committed to supporting team building, collaborative initiatives and greater patient access. Leadership and funding from central authorities are needed to promote the use of condition-specific referral protocols and shared electronic health records. More mature IHC programs usually formalize their evaluation process around outcomes that are recognized both by BMD and TCAMP.\n\n\nCONCLUSIONS\nThe major themes emerging from our review suggest that successful collaborative relationships between BMD and TCAMP are similar to those between other health professionals, and interventions which improve the effectiveness of joint working in other healthcare teams with may well be transferable to promote better partnership between the paradigms. However, striking a balance between the different practices and preserving the epistemological stance of TCAM will remain the greatest challenge in successful integration.",
"title": ""
},
{
"docid": "cd587b4f35290bf779b0c7ee0214ab72",
"text": "Time series data is perhaps the most frequently encountered type of data examined by the data mining community. Clustering is perhaps the most frequently used data mining algorithm, being useful in it's own right as an exploratory technique, and also as a subroutine in more complex data mining algorithms such as rule discovery, indexing, summarization, anomaly detection, and classification. Given these two facts, it is hardly surprising that time series clustering has attracted much attention. The data to be clustered can be in one of two formats: many individual time series, or a single time series, from which individual time series are extracted with a sliding window. Given the recent explosion of interest in streaming data and online algorithms, the latter case has received much attention.In this work we make a surprising claim. Clustering of streaming time series is completely meaningless. More concretely, clusters extracted from streaming time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature.We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method which, based on the concept of time series motifs, is able to meaningfully cluster some streaming time series datasets.",
"title": ""
},
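The setup criticized in the passage above, clustering subsequences extracted with a sliding window, is easy to reproduce. The sketch below shows that setup so a reader can inspect the resulting cluster centers themselves; the window length, series, and number of clusters are illustrative assumptions.

```python
# Sliding-window subsequence clustering: the setup the passage argues is
# meaningless. Window length, input series, and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def sliding_windows(series, w):
    """Extract all length-w subsequences of a 1-D series (stride 1)."""
    return np.stack([series[i:i + w] for i in range(len(series) - w + 1)])

rng = np.random.default_rng(0)
series = rng.standard_normal(2000).cumsum()      # a random-walk "stream"
X = sliding_windows(series, w=64)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
# The paper's claim: regardless of the input, the cluster centers tend to look
# like smooth sinusoid-shaped curves, carrying no information about the data.
print(km.cluster_centers_.shape)                  # (4, 64)
```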
{
"docid": "2a7c77985e3fca58ee8a69dd9b6f36d2",
"text": "New types of machine learning hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. We already see the limitations of existing algorithms for models that exploit structured input via complex and instancedependent control flow, which prohibits minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.",
"title": ""
},
{
"docid": "bba813ba24b8bc3a71e1afd31cf0454d",
"text": "Betweenness-Centrality measure is often used in social and computer communication networks to estimate the potential monitoring and control capabilities a vertex may have on data flowing in the network. In this article, we define the Routing Betweenness Centrality (RBC) measure that generalizes previously well known Betweenness measures such as the Shortest Path Betweenness, Flow Betweenness, and Traffic Load Centrality by considering network flows created by arbitrary loop-free routing strategies.\n We present algorithms for computing RBC of all the individual vertices in the network and algorithms for computing the RBC of a given group of vertices, where the RBC of a group of vertices represents their potential to collaboratively monitor and control data flows in the network. Two types of collaborations are considered: (i) conjunctive—the group is a sequences of vertices controlling traffic where all members of the sequence process the traffic in the order defined by the sequence and (ii) disjunctive—the group is a set of vertices controlling traffic where at least one member of the set processes the traffic. The algorithms presented in this paper also take into consideration different sampling rates of network monitors, accommodate arbitrary communication patterns between the vertices (traffic matrices), and can be applied to groups consisting of vertices and/or edges.\n For the cases of routing strategies that depend on both the source and the target of the message, we present algorithms with time complexity of O(n2m) where n is the number of vertices in the network and m is the number of edges in the routing tree (or the routing directed acyclic graph (DAG) for the cases of multi-path routing strategies). The time complexity can be reduced by an order of n if we assume that the routing decisions depend solely on the target of the messages.\n Finally, we show that a preprocessing of O(n2m) time, supports computations of RBC of sequences in O(kn) time and computations of RBC of sets in O(n3n) time, where k in the number of vertices in the sequence or the set.",
"title": ""
},
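Under a deterministic single-path routing policy and a given traffic matrix, the RBC of a vertex described above reduces to the expected number of packets that traverse it. The brute-force sketch below walks each source-target route to illustrate that definition only; the next-hop tables and traffic values are assumed toy data, and this is not the paper's O(n²m) algorithm.

```python
# Toy Routing Betweenness Centrality for deterministic single-path routing.
# routes[(s, t)] maps the current node to the next node on the route from s to t;
# traffic[(s, t)] is the expected number of packets from s to t.

def routing_betweenness(nodes, routes, traffic):
    rbc = {v: 0.0 for v in nodes}
    for (s, t), load in traffic.items():
        v = s
        while v != t:
            rbc[v] += load              # every node on the route handles the packets
            v = routes[(s, t)][v]       # follow the routing table
        rbc[t] += load                  # the target receives them as well
    return rbc

# Small example on a path graph a-b-c with shortest-path routing (assumed values).
nodes = ["a", "b", "c"]
routes = {
    ("a", "c"): {"a": "b", "b": "c"},
    ("a", "b"): {"a": "b"},
    ("b", "c"): {"b": "c"},
}
traffic = {("a", "c"): 2.0, ("a", "b"): 1.0, ("b", "c"): 1.0}
print(routing_betweenness(nodes, routes, traffic))
# {'a': 3.0, 'b': 4.0, 'c': 3.0}
```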
{
"docid": "9c887109d71605053ecb1732a1989a35",
"text": "In this paper, we develop a new approach called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the novel inception region proposal network (Inception-RPN), which slides an inception network with multi-scale windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate high recall word region proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization refinement. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"title": ""
},
{
"docid": "ee377d6087c66b617ed3667499685d34",
"text": "This paper is aimed to propose a noise power ratio (NPR) measurement method with fewer tones than traditionally used. Accurate measurement of NPR distortion is achieved by averaging distortion power measured at the notch frequency excited by multi-tone signals with different random phases. Automatic measurement software is developed to perform all NPR measurement procedures. Measurement results show that the variance is below 0.4 dB after averaging 100 NPR distortions excited by 60-tone. Compared to the NPR measurement results obtained by a more-typical 10000-tone stimulus, the measurement error is 0.23 dB using only 60-tone signals with average.",
"title": ""
},
{
"docid": "b75dd43655a70eaf0aaef43826de4337",
"text": "Plagiarism detection has been considered as a classification problem which can be approximated with intrinsic strategies, considering self-based information from a given document, and external strategies, considering comparison techniques between a suspicious document and different sources. In this work, both intrinsic and external approaches for plagiarism detection are presented. First, the main contribution for intrinsic plagiarism detection is associated to the outlier detection approach for detecting changes in the author’s style. Then, the main contribution for the proposed external plagiarism detection is the space reduction technique to reduce the complexity of this plagiarism detection task. Results shows that our approach is highly competitive with respect to the leading research teams in plagiarism detection.",
"title": ""
},
{
"docid": "60a977556ad78d2e955f750bc4a98707",
"text": "We propose a novel technique for faster Neural Network (NN) training by systematically approximating all the constituent matrix multiplications and convolutions. This approach is complementary to other approximation techniques, requires no changes to the dimensions of the network layers, hence compatible with existing training frameworks. We first analyze the applicability of the existing methods for approximating matrix multiplication to NN training, and extend the most suitable column-row sampling algorithm to approximating multi-channel convolutions. We apply approximate tensor operations to training MLP, CNN and LSTM network architectures on MNIST, CIFAR-100 and Penn Tree Bank datasets and demonstrate 30%-80% reduction in the amount of computations while maintaining little or no impact on the test accuracy. Our promising results encourage further study of general methods for approximating tensor operations and their application to NN training.",
"title": ""
},
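The column-row sampling idea mentioned in the passage above approximates a matrix product by sampling a few column/row pairs with probabilities proportional to their norms and rescaling. The sketch below shows the standard unbiased estimator that technique builds on; it is the generic estimator, not the paper's integration into neural-network training or its multi-channel convolution extension.

```python
# Column-row sampling approximation of a matrix product A @ B (unbiased estimator).
import numpy as np

def approx_matmul(A, B, c, rng=None):
    """Approximate A @ B using c sampled column/row outer products."""
    rng = np.random.default_rng(rng)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()                          # norm-proportional sampling probabilities
    idx = rng.choice(A.shape[1], size=c, replace=True, p=p)
    # Scale each sampled outer product by 1 / (c * p_i) so the estimate is unbiased.
    scale = 1.0 / (c * p[idx])
    return (A[:, idx] * scale) @ B[idx, :]

rng = np.random.default_rng(0)
A, B = rng.standard_normal((64, 256)), rng.standard_normal((256, 32))
approx = approx_matmul(A, B, c=128, rng=0)
exact = A @ B
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))   # relative error
```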
{
"docid": "69fb72937745829046379800649b4f6f",
"text": "For a plane wave incident on either a Luneburg lens or a modified Luneburg lens, the magnitude and phase of the transmitted electric field are calculated as a function of the scattering angle in the context of ray theory. It is found that the ray trajectory and the scattered intensity are not uniformly convergent in the vicinity of edge ray incidence on a Luneburg lens, which corresponds to the semiclassical phenomenon of orbiting. In addition, it is found that rays transmitted through a large-focal-length modified Luneburg lens participate in a far-zone rainbow, the details of which are exactly analytically soluble in ray theory. Using these results, the Airy theory of the modified Luneburg lens is derived and compared with the Airy theory of the rainbows of a homogeneous sphere.",
"title": ""
},
{
"docid": "6cad42e549f449c7156b0a07e2e02726",
"text": "Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.",
"title": ""
},
{
"docid": "2098191fad9a065bcb117f6cd7299dd7",
"text": "The growth of both IT technology and the Internet Communication has involved the development of lot of encrypted information. Among others techniques of message hiding, stenography is one them but more suspicious as no one cannot see the secret message. As we always use the MS Office, there are many ways to hide secret messages by using PowerPoint as normal file. In this paper, we propose a new technique to find a hidden message by analysing the in PowerPoint file using EnCase Transcript. The result analysis shows that Steganography technique had hidden a certain number of message which are invisible to naked eye.",
"title": ""
},
{
"docid": "b55fa34c0a969e93c3a02edccf4d9dcd",
"text": "This paper describes the Flexible Navigation system that extends the ROS Navigation stack and compatible libraries to separate computation from decision making, and integrates the system with FlexBE — the Flexible Behavior Engine, which provides intuitive supervision with adjustable autonomy. Although the ROS Navigation plugin model offers some customization, many decisions are internal to move_base. In contrast, the Flexible Navigation system separates global planning from local planning and control, and uses a hierarchical finite state machine to coordinate behaviors. The Flexible Navigation system includes Python-based state implementations and ROS nodes derived from the move_base plugin model to provide compatibility with existing libraries as well as future extensibility. The paper concludes with complete system demonstrations in both simulation and hardware using the iRobot Create and Kobuki-based Turtlebot running under ROS Kinetic. The system supports multiple independent robots.",
"title": ""
}
] |
scidocsrr
|
7e2af48fc319eecb15d2803c614fd278
|
Identifying confounders using additive noise models
|
[
{
"docid": "a8dcddea10d4c5468618d233a4b2081e",
"text": "Dimensionality reduction is an important task in machine learning, for it facilitates classification, compression, and visualization of high-dimensional data by mitigating undesired properties of high-dimensional spaces. Over the last decade, a large number of new (nonlinear) techniques for dimensionality reduction have been proposed. Most of these techniques are based on the intuition that data lies on or near a complex low-dimensional manifold that is embedded in the high-dimensional space. New techniques for dimensionality reduction aim at identifying and extracting the manifold from the high-dimensional space. A systematic empirical evaluation of a large number of dimensionality reduction techniques has been presented in [86]. This work has led to the development of the Matlab Toolbox for Dimensionality Reduction, which contains implementations of 27 techniques for dimensionality reduction. In addition, the toolbox contains implementation of 6 intrinsic dimensionality estimators and functions for out-of-sample extension, data generation, and data prewhitening. The report describes the techniques that are implemented in the toolbox in detail. Furthermore, it presents a number of examples that illustrate the functionality of the toolbox, and provide insight in the capabilities of state-of-the-art techniques for dimensionality reduction.",
"title": ""
},
{
"docid": "959ba9c0929e36a8ef4a22a455ed947a",
"text": "The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities.",
"title": ""
}
] |
[
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
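The density evaluation that both SCAN and SCAN++ perform is the structural similarity between adjacent nodes, together with an epsilon threshold test. The sketch below shows only that underlying similarity; it does not implement the DTAR-based pruning that is SCAN++'s contribution, and the example graph and epsilon value are assumptions.

```python
# Structural similarity used by SCAN-family algorithms (SCAN, SCAN++):
# sigma(u, v) = |N[u] ∩ N[v]| / sqrt(|N[u]| * |N[v]|), with closed neighborhoods.
import math

def structural_similarity(adj, u, v):
    nu = adj[u] | {u}                      # closed neighborhood of u
    nv = adj[v] | {v}
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))

def epsilon_neighbors(adj, u, eps=0.7):
    """Adjacent nodes whose structural similarity with u reaches eps."""
    return {v for v in adj[u] if structural_similarity(adj, u, v) >= eps}

# Tiny example graph (adjacency sets are assumed toy data).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(structural_similarity(adj, 0, 1))    # 1.0 (identical closed neighborhoods)
print(epsilon_neighbors(adj, 2, eps=0.6))
```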
{
"docid": "280acc4e653512fabf7b181be57b31e2",
"text": "BACKGROUND\nHealth care workers incur frequent injuries resulting from patient transfer and handling tasks. Few studies have evaluated the effectiveness of mechanical lifts in preventing injuries and time loss due to these injuries.\n\n\nMETHODS\nWe examined injury and lost workday rates before and after the introduction of mechanical lifts in acute care hospitals and long-term care (LTC) facilities, and surveyed workers regarding lift use.\n\n\nRESULTS\nThe post-intervention period showed decreased rates of musculoskeletal injuries (RR = 0.82, 95% CI: 0.68-1.00), lost workday injuries (RR = 0.56, 95% CI: 0.41-0.78), and total lost days due to injury (RR = 0.42). Larger reductions were seen in LTC facilities than in hospitals. Self-reported frequency of lift use by registered nurses and by nursing aides were higher in the LTC facilities than in acute care hospitals. Observed reductions in injury and lost day injury rates were greater on nursing units that reported greater use of the lifts.\n\n\nCONCLUSIONS\nImplementation of patient lifts can be effective in reducing occupational musculoskeletal injuries to nursing personnel in both LTC and acute care settings. Strategies to facilitate greater use of mechanical lifting devices should be explored, as further reductions in injuries may be possible with increased use.",
"title": ""
},
{
"docid": "28f145c48cc50c61de6a764fdd357375",
"text": "In this communication, a circularly polarized (CP) substrate-integrated waveguide horn antenna is proposed and studied. The CP horn antenna is implemented on a single-layer substrate with a thickness of $0.12\\lambda _{\\mathrm {\\mathbf {0}}}$ at the center frequency (1.524 mm) for 24 GHz system applications. It comprises of an integrated phase controlling and power dividing structure, two waveguide antennas, and an antipodal linearly tapered slot antenna. With such a phase controlling and power dividing structure fully integrated inside the horn antenna, two orthogonal electric fields of the equal amplitude with 90° phase difference are achieved at the aperture plane of the horn antenna, thus, yielding an even effective circular polarization in a compact single-layered geometry. The measured results of the prototyped horn antenna exhibit a 5% bandwidth (23.7–24.9 GHz) with an axial ratio below 3 dB and a VSWR below 2. The gain of the antenna is around 8.5 dBi.",
"title": ""
},
{
"docid": "d84abd378e3756052ede68731d73ca45",
"text": "A major difficulty in applying word vector embeddings in information retrieval is in devising an effective and efficient strategy for obtaining representations of compound units of text, such as whole documents, (in comparison to the atomic words), for the purpose of indexing and scoring documents. Instead of striving for a suitable method to obtain a single vector representation of a large document of text, we aim to develop a similarity metric that makes use of the similarities between the individual embedded word vectors in a document and a query. More specifically, we represent a document and a query as sets of word vectors, and use a standard notion of similarity measure between these sets, computed as a function of the similarities between each constituent word pair from these sets. We then make use of this similarity measure in combination with standard information retrieval based similarities for document ranking. The results of our initial experimental investigations show that our proposed method improves MAP by up to 5.77%, in comparison to standard text-based language model similarity, on the TREC 6, 7, 8 and Robust ad-hoc test collections.",
"title": ""
},
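The ranking function described in the passage above aggregates pairwise similarities between the embedded query words and document words. A minimal sketch of one common instantiation is shown below, using the mean of each query word's best cosine match in the document and a simple interpolation with a language-model score; the exact aggregation function and interpolation weight are assumptions, not necessarily the paper's choices.

```python
# Set-based similarity between embedded query words and document words.
import numpy as np

def embedding_score(query_vecs, doc_vecs):
    """query_vecs: (q, d), doc_vecs: (n, d) arrays of word embeddings."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = q @ d.T                        # (q, n) pairwise cosine similarities
    return sims.max(axis=1).mean()        # best-matching document word per query word

def combined_score(lm_score, emb_score, alpha=0.8):
    # Interpolate with a standard language-model retrieval score (alpha is assumed).
    return alpha * lm_score + (1.0 - alpha) * emb_score

rng = np.random.default_rng(0)
print(embedding_score(rng.standard_normal((3, 50)), rng.standard_normal((120, 50))))
```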
{
"docid": "16fe3567780f3c3f2d8951b4db76f792",
"text": "Despite the well documented and emerging insider threat to information systems, there is currently no substantial effort devoted to addressing the problem of internal IT misuse. In fact, the great majority of misuse countermeasures address forms of abuse originating from external factors (i.e. the perceived threat from unauthorized users). This paper suggests a new and innovative approach of dealing with insiders that abuse IT systems. The proposed solution estimates the level of threat that is likely to originate from a particular insider by introducing a threat evaluation system based on certain profiles of user behaviour. However, a substantial amount of work is required, in order to materialize and validate the proposed solutions.",
"title": ""
},
{
"docid": "34557bc145ccd6d83edfc80da088f690",
"text": "This thesis is dedicated to my mother, who taught me that success is not the key to happiness. Happiness is the key to success. If we love what we are doing, we will be successful. This thesis is dedicated to my father, who taught me that luck is not something that is given to us at random and should be waited for. Luck is the sense to recognize an opportunity and the ability to take advantage of it. iii ACKNOWLEDGEMENTS I would like to thank my thesis committee –",
"title": ""
},
{
"docid": "f022f9fcc42ec2c919fcead0e8e0cf83",
"text": "Object recognition and pose estimation is an important task in computer vision. A pose estimation algorithm using only depth information is proposed in this paper. Foreground and background points are distinguished based on their relative positions with boundaries. Model templates are selected using synthetic scenes to make up for the point pair feature algorithm. An accurate and fast pose verification method is introduced to select result poses from thousands of poses. Our algorithm is evaluated against a large number of scenes and proved to be more accurate than algorithms using both color information and depth information.",
"title": ""
},
{
"docid": "e389aec1a2cbd7373452915703eddbc2",
"text": "Information-centric networking (ICN) proposes to redesign the Internet by replacing its host centric design wit h an information centric one, by establishing communication at the naming level, with the receiver side acting as the driving force beh ind content delivery. Such design promises great advantages for the del ivery of content to and from mobile hosts. This, however, is at the exp ense of increased networking overhead, specifically in the case o f Nameddata Networking (NDN) due to use of flooding for path recovery. In this paper, we propose a mobility centric solution to address the overhead and scalability problems in NDN by introducing a novel forwarding architecture that leverages decentralized serverassisted routing over flooding based strategies. We present an indepth study of the proposed architecture and provide demons trative results on its throughput and overhead performance at different levels of mobility proving its scalability and effectiveness, when compared to the current NDN based forwarding strategies.",
"title": ""
},
{
"docid": "7e10aa210d6985d757a21b8b6c49ae53",
"text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t",
"title": ""
},
{
"docid": "ec4b7c50f3277bb107961c9953fe3fc4",
"text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview",
"title": ""
},
{
"docid": "15fc4abd2491b57c55c4ce339f41067e",
"text": "A series of pyrazole analogues of natural piperine were synthesized by removing the basic piperidine moiety from the piperine nucleus. Piperine upon hydrolysis and oxidation, converted to piperonal and allowed to condense with substituted acetophenone gave chalcone derivative and cyclized finally with thiosemicarbazide to form pyrazole derivatives of piperine. Docking studies were carried out against different targets like Cyclooxygenase, farnasyl transferase receptors. Majority of the synthesized chemical compounds showed good fit with the active site of all the docked targets.Compound 6a have shown significant anti inflammatory activity and 6d and 6c have shown significant anticancer activity when compared with standard drugs.",
"title": ""
},
{
"docid": "46410be2730753051c4cb919032fad6f",
"text": "categories. That is, since cue validity is the probability of being in some category given some property, this probability will increase (or at worst not decrease) as the size of the category increases (e.g. the probability of being an animal given the property of flying is greater than the probability of bird given flying, since there must be more animals that fly than birds that fly).6 The idea that cohesive categories maximize the probability of particular properties given the category fares no better. In this case, the most specific categories will always be picked out. Medin (1982) has analyzed a variety of formal measures of category cohe siveness and pointed out problems with all of them. For example, one possible principle is to have concepts such that they minimize the similarity between contrasting categories; but minimizing between-category similarity will always lead one to sort a set of n objects into exactly two categories. Similarly, functions based on maximizing within-category similarity while minimizing between-category similarity lead to a variety of problems and counterintuitive expectations about when to accept new members into existent categories versus when to set up new categories. At a less formal but still abstract level, Sternberg (1982) has tried to translate some of Goodman's (e.g. 1983) ideas about induction into possible constraints on natural concepts. Sternberg suggests that the apparent naturalness of a concept increases with the familiarity of the concept (where familiarity is related to Goodman's notion of entrenchment), and decreases with the number of transformations specified in the concept (e.g. aging specifies certain trans",
"title": ""
},
{
"docid": "31996310254c69e62f4971db09499485",
"text": "This paper studies P2P lending and the factors explaining loan default. This is an important issue because in P2P lending individual investors bear the credit risk, instead of financial institutions, which are experts in dealing with this risk. P2P lenders suffer a severe problem of information asymmetry, because they are at a disadvantage facing the borrower. For this reason, P2P lending sites provide potential lenders with information about borrowers and their loan purpose. They also assign a grade to each loan. The empirical study is based on loans' data collected from Lending Club (N = 24,449) from 2008 to 2014 that are first analyzed by using univariate means tests and survival analysis. Factors explaining default are loan purpose, annual income, current housing situation, credit history and indebtedness. Secondly, a logistic regression model is developed to predict defaults. The grade assigned by the P2P lending site is the most predictive factor of default, but the accuracy of the model is improved by adding other information, especially the borrower's debt level.",
"title": ""
},
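The default-prediction model described in the passage above is a logistic regression over borrower and loan attributes. A minimal sketch of that kind of model with scikit-learn follows; the feature names and synthetic data are assumptions for illustration, not the Lending Club schema or the paper's exact variables.

```python
# Minimal default-prediction model (logistic regression) of the kind described above.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "annual_income": rng.lognormal(11, 0.5, n),
    "debt_to_income": rng.uniform(0, 40, n),
    "grade": rng.integers(1, 8, n),            # 1 = best grade, 7 = worst (assumed coding)
})
# Synthetic default labels correlated with grade and indebtedness.
logit = -3.0 + 0.4 * df["grade"] + 0.03 * df["debt_to_income"]
df["default"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="default"), df["default"], test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```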
{
"docid": "0aabb07ef22ef59d6573172743c6378b",
"text": "Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between individual source’s posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.",
"title": ""
},
{
"docid": "1b05959625fb8b733e9b9ecf3dcef22e",
"text": "Relational agents—computational artifacts designed to build and maintain longterm social-emotional relationships with users—may provide an effective interface modality for older adults. This is especially true when the agents use simulated face-toface conversation as the primary communication medium, and for applications in which repeated interactions over long time periods are required, such as in health behavior change. In this article we discuss the design of a relational agent for older adults that plays the role of an exercise advisor, and report on the results of a longitudinal study involving 21 adults aged 62 to 84, half of whom interacted with the agent daily for two months in their homes and half who served as a standard-of-care control. Results indicate the agent was accepted and liked, and was significantly more efficacious at increasing physical activity (daily steps walked) than the control.",
"title": ""
},
{
"docid": "0d9340dc849332af5854380fa460cfd5",
"text": "Many scientific datasets archive a large number of variables over time. These timeseries data streams typically track many variables over relatively long periods of time, and therefore are often both wide and deep. In this paper, we describe the Visual Query Language (VQL) [3], a technology for locating time series patterns in historical or real time data. The user interactively specifies a search pattern, VQL finds similar shapes, and returns a ranked list of matches. VQL supports both univariate and multivariate queries, and allows the user to interactively specify the the quality of the match, including temporal warping, amplitude warping, and temporal constraints between features.",
"title": ""
},
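The core operation sketched in the passage above is ranking the locations in a long series whose shape best matches a user-specified pattern. A simple z-normalized sliding-window Euclidean match returning the top-k offsets is shown below; this normalization and distance are common defaults and not necessarily VQL's scoring, which also supports warping and temporal constraints.

```python
# Ranked matching of a query shape against a long series, in the spirit of the
# pattern search described above.
import numpy as np

def znorm(x):
    return (x - x.mean()) / (x.std() + 1e-8)

def top_k_matches(series, query, k=3):
    q = znorm(np.asarray(query, dtype=float))
    m = len(q)
    dists = []
    for i in range(len(series) - m + 1):
        window = znorm(series[i:i + m])
        dists.append((np.linalg.norm(window - q), i))
    dists.sort()                       # smaller distance = better match
    return dists[:k]                   # list of (distance, start_offset)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
query = np.sin(np.linspace(0, 2 * np.pi, 200))      # one full cycle as the query shape
print(top_k_matches(series, query))
```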
{
"docid": "2e02a16fa9c40bfb7e498bef8927e5ff",
"text": "There exist two broad approaches to information retrieval (IR) in the legal domain: those based on manual knowledge engineering (KE) and those based on natural language processing (NLP). The KE approach is grounded in artificial intelligence (AI) and case-based reasoning (CBR), whilst the NLP approach is associated with open domain statistical retrieval. We provide some original arguments regarding the focus on KE-based retrieval in the past and why this is not sustainable in the long term. Legal approaches to questioning (NLP), rather than arguing (CBR), are proposed as the appropriate jurisprudential and cognitive underpinning for legal IR. Recall within the context of precision is proposed as a better fit to law than the ‘total recall’ model of the past, wherein conceptual and contextual search are combined to improve retrieval performance for both parties in a dispute.",
"title": ""
},
{
"docid": "f0057666e16f7a0a05b4890d48fdbf42",
"text": "BACKGROUND\nThe aim of this review was to systematically assess and meta-analyze the effects of yoga on modifiable biological cardiovascular disease risk factors in the general population and in high-risk disease groups.\n\n\nMETHODS\nMEDLINE/PubMed, Scopus, the Cochrane Library, and IndMED were screened through August 2013 for randomized controlled trials (RCTs) on yoga for predefined cardiovascular risk factors in healthy participants, non-diabetic participants with high risk for cardiovascular disease, or participants with type 2 diabetes mellitus. Risk of bias was assessed using the Cochrane risk of bias tool.\n\n\nRESULTS\nForty-four RCTs with a total of 3168 participants were included. Risk of bias was high or unclear for most RCTs. Relative to usual care or no intervention, yoga improved systolic (mean difference (MD)=-5.85 mm Hg; 95% confidence interval (CI)=-8.81, -2.89) and diastolic blood pressure (MD=-4.12 mm Hg; 95%CI=-6.55, -1.69), heart rate (MD=-6.59 bpm; 95%CI=-12.89, -0.28), respiratory rate (MD=-0.93 breaths/min; 95%CI=-1.70, -0.15), waist circumference (MD=-1.95 cm; 95%CI=-3.01, -0.89), waist/hip ratio (MD=-0.02; 95%CI=-0.03, -0.00), total cholesterol (MD=-13.09 mg/dl; 95%CI=-19.60, -6.59), HDL (MD=2.94 mg/dl; 95%CI=0.57, 5.31), VLDL (MD=-5.70 mg/dl; 95%CI=-7.36, -4.03), triglycerides (MD=-20.97 mg/dl; 95%CI=-28.61, -13.32), HbA1c (MD=-0.45%; 95%CI=-0.87, -0.02), and insulin resistance (MD=-0.19; 95%CI=-0.30, -0.08). Relative to exercise, yoga improved HDL (MD=3.70 mg/dl; 95%CI=1.14, 6.26).\n\n\nCONCLUSIONS\nThis meta-analysis revealed evidence for clinically important effects of yoga on most biological cardiovascular disease risk factors. Despite methodological drawbacks of the included studies, yoga can be considered as an ancillary intervention for the general population and for patients with increased risk of cardiovascular disease.",
"title": ""
},
{
"docid": "662c6a0e2d9a10a9e1fd1046e827adc0",
"text": "Counterfactuals are mental representations of alternatives to the past and produce consequences that are both beneficial and aversive to the individual. These apparently contradictory effects are integrated in a functionalist model of counterfactual thinking. The author reviews research in support of the assertions that (a) counterfactual thinking is activated automatically in response to negative affect, (b) the content of counterfactuals targets particularly likely causes of misfortune, (c) counterfactuals produce negative affective consequences through a contrast-effect mechanism and positive inferential consequences through a causal-inference mechanism, and (d) the net effect of counterfactual thinking is beneficial.",
"title": ""
},
{
"docid": "c12cd99e8f1184fb77c7027c71a8dace",
"text": "This paper reports on a wearable gesture-based controller fabricated using the sensing capabilities of the flexible thin-film piezoelectric polymer polyvinylidene fluoride (PVDF) which is shown to repeatedly and accurately discern, in real time, between right and left hand gestures. The PVDF is affixed to a compression sleeve worn on the forearm to create a wearable device that is flexible, adaptable, and highly shape conforming. Forearm muscle movements, which drive hand motions, are detected by the PVDF which outputs its voltage signal to a developed microcontroller-based board and processed by an artificial neural network that was trained to recognize the generated voltage profile of right and left hand gestures. The PVDF has been spatially shaded (etched) in such a way as to increase sensitivity to expected deformations caused by the specific muscles employed in making the targeted right and left gestures. The device proves to be exceptionally accurate both when positioned as intended and when rotated and translated on the forearm.",
"title": ""
}
] |
scidocsrr
|
8ab04339a6ecce73557db6cf97966376
|
Bottleneck Conditional Density Estimation
|
[
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
}
] |
[
{
"docid": "77233d4f7a7bb0150b5376c7bb93c108",
"text": "In-filled frame structures are commonly used in buildings, even in those located in seismically active regions. Precent codes unfortunately, do not have adequate guidance for treating the modelling, analysis and design of in-filled frame structures. This paper addresses this need and first develops an appropriate technique for modelling the infill-frame interface and then uses it to study the seismic response of in-filled frame structures. Finite element time history analyses under different seismic records have been carried out and the influence of infill strength, openings and soft-storey phenomenon are investigated. Results in terms of tip deflection, fundamental period, inter-storey drift ratio and stresses are presented and they will be useful in the seismic design of in-filled frame structures.",
"title": ""
},
{
"docid": "407561ea1df1544c94e2516d66a40dcc",
"text": "This paper reviews current technological developments in polarization engineering and the control of the quantum-confined Stark effect (QCSE) for InxGa1- xN-based quantum-well active regions, which are generally employed in visible LEDs for solid-state lighting applications. First, the origin of the QCSE in III-N wurtzite semiconductors is introduced, and polarization-induced internal fields are discussed in order to provide contextual background. Next, the optical and electrical properties of InxGa1- xN-based quantum wells that are affected by the QCSE are described. Finally, several methods for controlling the QCSE of InxGa1- xN-based quantum wells are discussed in the context of performance metrics of visible light emitters, considering both pros and cons. These strategies include doping control, strain/polarization field/electronic band structure control, growth direction control, and crystalline structure control.",
"title": ""
},
{
"docid": "1945d4663a49a5e1249e43dc7f64d15b",
"text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.",
"title": ""
},
{
"docid": "ea73c0a2ef6196429a29591a758bc4ca",
"text": "Broadband and planar microstrip-to-waveguide transitions are developed in the millimeter-wave band. Novel printed pattern is applied to the microstrip substrate in the ordinary back-short-type transition to operate over extremely broad frequency bandwidth. Furthermore, in order to realize flat and planar transition which does not need back-short waveguide, the transition is designed in multi-layer substrate. Both transitions are fabricated and their performances are measured and simulated in the millimeter-wave band.",
"title": ""
},
{
"docid": "05e3d07db8f5ecf3e446a28217878b56",
"text": "In this paper, we investigate the topic of gender identification for short length, multi-genre, content-free e-mails. We introduce for the first time (to our knowledge), psycholinguistic and gender-linked cues for this problem, along with traditional stylometric features. Decision tree and Support Vector Machines learning algorithms are used to identify the gender of the author of a given e-mail. The experiment results show that our approach is promising with an average accuracy of 82.2%.",
"title": ""
},
{
"docid": "46f30695d32360ce8fb9a361ca557956",
"text": "This paper addresses the control problem of transportation of a slung load by a multi-rotor. Control of the slung load system cannot be achieved with usual controllers for multi-rotors because motion of the load is highly coupled with the multi-rotor. Although many controllers have been developed for the system, there still exist several problems. For instance, in trajectory-following control, many previous works make trajectory beforehand, and perform trajectory-following control afterward. Since the time spent to generate trajectories is too long for real-time control applications, alternative efficient algorithms are required. With this in mind, we adopt a fast nonlinear Model Predictive Control (MPC) algorithm. This control framework solves a nonlinear optimal control problem with a finite time horizon using Sequential Linear Quadratic (SLQ) solver, in MPC. The originally proposed work optimizes the implementation time and successfully conducts real-time waypoint flight of general multi-rotor. Here, we apply the framework to the slung load system. Also, an obstacle-avoidance algorithm suitable for the slung load system is developed for cluttered environments. Numerical simulations are conducted to validate the developed method.",
"title": ""
},
{
"docid": "0d0f0c42a8ef6a00fc5143e0f70c86e7",
"text": "Performing recognition tasks using latent fingerprint samples is often challenging for automated identification systems due to poor quality, distortion, and partially missing information from the input samples. We propose a direct latent fingerprint reconstruction model based on conditional generative adversarial networks (cGANs). Two modifications are applied to the cGAN to adapt it for the task of latent fingerprint reconstruction. First, the model is forced to generate three additional maps to the ridge map to ensure that the orientation and frequency information are considered in the generation process, and prevent the model from filling large missing areas and generating erroneous minutiae. Second, a perceptual ID preservation approach is developed to force the generator to preserve the ID information during the reconstruction process. Using a synthetically generated database of latent fingerprints, the deep network learns to predict missing information from the input latent samples. We evaluate the proposed method in combination with two different fingerprint matching algorithms on several publicly available latent fingerprint datasets. We achieved rank-10 accuracy of 88.02% on the IIIT-Delhi latent fingerprint database for the task of latent-to-latent matching and rank-50 accuracy of 70.89% on the IIIT-Delhi MOLF database for the task of latent-to-sensor matching. Experimental results of matching reconstructed samples in both latent-to-sensor and latent-to-latent frameworks indicate that the proposed method significantly increases the matching accuracy of the fingerprint recognition systems for the latent samples.",
"title": ""
},
{
"docid": "96344ccc2aac1a7e7fbab96c1355fa10",
"text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130mV/pH is achieved, exceeding the conventional Nernst limit of 59mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as common mode signal to the differential pair and are cancelled by the matched transistors. This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.",
"title": ""
},
{
"docid": "59693182ac2803d821c508e92383d499",
"text": "We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the use at comparisons between images of the original model against those of a simplified model to determine the cost of an ease collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. All of these trade-offs are balanced by the image metric. Benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. In order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh. This method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces.",
"title": ""
},
{
"docid": "c6a97ab04490f8deecb31c6e429f1953",
"text": "In this paper we describe the new security features of the international standard DLMS/COSEM that came along with its new Green Book Ed. 8. We compare them with those of the German Smart Meter Gateway approach that uses TLS to protect the privacy of connections. We show that the security levels of the cryptographic core methods are similar in both systems. However, there are several aspects concerning the security on which the German approach provides more concrete implementation instructions than DLMS/COSEM does (like lifetimes of certificates and random generators). We describe the differences in security and architecture of the two systems.",
"title": ""
},
{
"docid": "abbdc23d1c8833abda16f477dddb45fd",
"text": "Recently introduced generative adversarial networks (GANs) have been shown numerous promising results to generate realistic samples. In the last couple of years, it has been studied to control features in synthetic samples generated by the GAN. Auxiliary classifier GAN (ACGAN), a conventional method to generate conditional samples, employs a classification layer in discriminator to solve the problem. However, in this paper, we demonstrate that the auxiliary classifier can hardly provide good guidance for training of the generator, where the classifier suffers from overfitting. Since the generator learns from classification loss, such a problem has a chance to hinder the training. To overcome this limitation, here, we propose a controllable GAN (ControlGAN) structure. By separating a feature classifier from the discriminator, the classifier can be trained with data augmentation technique, which can support to make a fine classifier. Evaluated with the CIFAR-10 dataset, ControlGAN outperforms AC-WGAN-GP which is an improved version of the ACGAN, where Inception score of the ControlGAN is 8.61 ± 0.10. Furthermore, we demonstrate that the ControlGAN can generate intermediate features and opposite features for interpolated input and extrapolated input labels that are not used in the training process. It implies that the ControlGAN can significantly contribute to the variety of generated samples.",
"title": ""
},
{
"docid": "946a5835970a54c748031f2c9945a661",
"text": "There is a general move in the aerospace industry to increase the amount of electrically powered equipment on future aircraft. This is generally referred to as the \"more electric aircraft\" and brings on a number of technical challenges that need to be addressed and overcome. Recent advancements in power electronics technology are enabling new systems to be developed and applied to aerospace applications. The growing trend is to connect the AC generator to the aircraft engine via a direct connection or a fixed ratio transmission thus, resulting in the generator providing a variable frequency supply. This move offers benefits to the aircraft such as reducing the weight and improving the reliability. Many aircraft power systems are now operating with a variable frequency over a typical range of 350 Hz to 800 Hz which varies with the engine speed[1,2]. This paper presents the results from a simple scheme for an adaptive control algorithm which could be suitable for use with an electric actuator (or other) aircraft load. The design of this system poses significant challenges due to the nature of the load range and supply frequency variation and requires many features such as: 1) Small input current harmonics to minimize losses., 2) Minimum size and weight to maximize portability and power density. Details will be given on the design methodology and simulation results obtained.",
"title": ""
},
{
"docid": "b04a1c4a52cfe9310ff1e895ccdec35c",
"text": "The problem of recovering the sparse and low-rank components of a matrix captures a broad spectrum of applications. Authors in [4] proposed the concept of ”rank-sparsity incoherence” to characterize the fundamental identifiability of the recovery, and derived practical sufficient conditions to ensure the high possibility of recovery. This exact recovery is achieved via solving a convex relaxation problem where the l1 norm and the nuclear norm are utilized for being surrogates of the sparsity and low-rank. Numerically, this convex relaxation problem was reformulated into a semi-definite programming (SDP) problem whose dimension is considerably enlarged, and this SDP reformulation was proposed to be solved by generic interior-point solvers in [4]. This paper focuses on the algorithmic improvement for the sparse and low-rank recovery. In particular, we observe that the convex relaxation problem generated by the approach of [4] is actually well-structured in both the objective function and constraint, and it fits perfectly the applicable range of the classical alternating direction method (ADM). Hence, we propose the ADM approach for accomplishing the sparse and low-rank recovery, by taking full exploitation to the high-level separable structure of the convex relaxation problem. Preliminary numerical results are reported to verify the attractive efficiency of the ADM approach for recovering sparse and low-rank components of matrices.",
"title": ""
},
{
"docid": "dd52742343462b3106c18274c143928b",
"text": "This paper presents a descriptive account of the social practices surrounding the iTunes music sharing of 13 participants in one organizational setting. Specifically, we characterize adoption, critical mass, and privacy; impression management and access control; the musical impressions of others that are created as a result of music sharing; the ways in which participants attempted to make sense of the dynamic system; and implications of the overlaid technical, musical, and corporate topologies. We interleave design implications throughout our results and relate those results to broader themes in a music sharing design space.",
"title": ""
},
{
"docid": "c61470e2c1310a9c6fa09dc96659d4ab",
"text": "Selenium IDE Locating Elements There is a great responsibility for developers and testers to ensure that web software exhibits high reliability and speed. Somewhat recently, the software community has seen a rise in the usage of AJAX in web software development to achieve this goal. The advantage of AJAX applications is that they are typically very responsive. The vEOC is an Emergency Management Training application which requires this level of interactivity. Selenium is great in that it is an open source testing tool that can handle the amount of JavaScript present in AJAX applications, and even gives the tester the freedom to add their own features. Since web software is so frequently modified, the main goal for any test developer is to create sustainable tests. How can Selenium tests be made more maintainable?",
"title": ""
},
{
"docid": "df36496e721bf3f0a38791b6a4b99b2d",
"text": "Support for an extremist entity such as Islamic State (ISIS) somehow manages to survive globally online despite considerable external pressure and may ultimately inspire acts by individuals having no history of extremism, membership in a terrorist faction, or direct links to leadership. Examining longitudinal records of online activity, we uncovered an ecology evolving on a daily time scale that drives online support, and we provide a mathematical theory that describes it. The ecology features self-organized aggregates (ad hoc groups formed via linkage to a Facebook page or analog) that proliferate preceding the onset of recent real-world campaigns and adopt novel adaptive mechanisms to enhance their survival. One of the predictions is that development of large, potentially potent pro-ISIS aggregates can be thwarted by targeting smaller ones.",
"title": ""
},
{
"docid": "6168009d570d5e959a2dbaf2b7766028",
"text": "Acetyl salicylic acid (ASA) is commonly known as aspirin. It is one of the most important anti-inflammatory drugs in the world [1-3]. Aspirin is a white powder stable in a dry environment but that is hydrolyzed to Salicylic acid and acetic acid under humid or moist conditions. Hydrolysis also can occur when aspirin is combined with alkaline salts or with salts containing water of hydration [4]. In 1960’s Felix Hofmann of the Bayer Company in Germany prepared aspirin [5]. This was found to good medicinal properties, low membrane irritation and a reasonable taste. He called the new medicine as aspirin (‘a’ for acetylthe systematic name for the compound at the time of was acetyl salicylic acid, ‘spir’ for spirea, the meadow sweat plant). The active ingredient of aspirin was first discovered from the bark of the willow tree in 1763 by Edward stone. Aspirin is parts of a group of medications called as non steroidal-inflammatory drugs (NSAIDs), but differ from most of other NSAIDs in the mechanism of action [3-5].",
"title": ""
},
{
"docid": "e9006af64364e6dcd1ea4684642539de",
"text": "Since the publication of the PDP volumes in 1986,1 learning by backpropagation has become the most popular method of training neural networks. The reason for the popularity is the underlying simplicity and relative power of the algorithm. Its power derives from the fact that, unlike its precursors, the perceptron learning rule and the Widrow-Hoff learning rule, it can be employed for training nonlinear networks of arbitrary connectivity. Since such networks are often required for real-world applications, such a learning procedure is critical. Nearly as important as its power in explaining its popularity is its simplicity. The basic igea is old and simple; namely define an error function and use hill climbing (or gradient descent if you prefer going downhill) to find a set of weights which optimize performance on a particular task. The algorithm is so simple that it can be implemented in a few lines' of code, and there have been no doubt many thousands of implementations of the algorithm by now. The name back propagation actually comes from the term employed by Rosenblatt (1962) for his attempt to generalize the perceptron learning algorithm to the multilayer case. There were many attempts to generalize the perceptron learning procedure to multiple layers during the 1960s and 1970s, but none of them were especially successful. There appear to have been at least three independent inventions of the modem version of the back-propagation algorithm: Paul Werbos developed the basic idea in 1974 in a Ph.D. dissertation entitled",
"title": ""
},
{
"docid": "0962dfe13c1960b345bb0abb480f1520",
"text": "This electronic document presents the application of a novel method of bipedal walking pattern generation assured by “the liquid level model” and the preview control of zero-moment-point (ZMP). In this method, the trajectory of the center of mass (CoM) of the robot is generated assured by the preview controller to maintain the ZMP at the desired location knowing that the robot is modeled as a running liquid level model on a tank. The proposed approach combines the preview control theory with simple model “the liquid level model”, to assure a stable dynamic walking. Simulations results show that the proposed pattern generator guarantee not only to walk dynamically stable but also good performance.",
"title": ""
},
{
"docid": "547f8fc80017e3c63911e2ea4b2eeadd",
"text": "This work investigates the effectiveness of learning to rank methods for entity search. Entities are represented by multi-field documents constructed from their RDF triples, and field-based text similarity features are extracted for query-entity pairs. State-of-the-art learning to rank methods learn models for ad-hoc entity search. Our experiments on an entity search test collection based on DBpedia confirm that learning to rank methods are as powerful for ranking entities as for ranking documents, and establish a new state-of-the-art for accuracy on this benchmark dataset.",
"title": ""
}
] |
scidocsrr
|
ff811d0c6627812b007a2e36cf8389b2
|
The conceptual model to solve the problem of interoperability in health information systems
|
[
{
"docid": "35c75b3be4667f50b703702ef9496ab1",
"text": "Software analysis and evaluation becomes a well-established practice inside the architecting community of the software systems. The development effort, the time and costs of complex systems are considerably high. In order to assess system's quality against the requirements of its customers, the architects and the developers need methods and tools to support them during the evaluation process. Different research groups have taken such initiatives and are proposing various methods for software architecture quality evaluation.",
"title": ""
}
] |
[
{
"docid": "b0747e6cbc20a8e4d9dec0ef75386701",
"text": "The US Vice President, Al Gore, in a speech on the information superhighway, suggested that it could be used to remotely control a nuclear reactor. We do not have enough confidence in computer software, hardware, or networks to attempt this experiment, but have instead built a Internet-accessible, remote-controlled model car that provides a race driver's view via a video camera mounted on the model car. The remote user can see live video from the car, and, using a mouse, control the speed and direction of the car. The challenge was to build a car that could be controlled by novice users in narrow corridors, and that would work not only with the full motion video that the car natively provides, but also with the limited size and frame rate video available over the Internet multicast backbone. We have built a car that has been driven from a site 50 miles away over a 56-kbps IP link using $\\mbox{{\\tt nv}}$ format video at as little as one frame per second and at as low as $100\\times 100$ pixels resolution. We also built hardware to control the car, using a slightly modified voice grade channel videophone. Our experience leads us to believe that it is now possible to put together readily available hardware and software components to build a cheap and effective telepresence.",
"title": ""
},
{
"docid": "e6b27bb9f2b74791af5e74c16c7c47da",
"text": "Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting the semantic similarity on data pairs, has received increasing attention recently. For most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn good image representation tailored to hash coding and formally control the quantization error. The DHN model constitutes four key components: (1) a subnetwork with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise crossentropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods.",
"title": ""
},
{
"docid": "76e75c4549cbaf89796355b299bedfdc",
"text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.",
"title": ""
},
{
"docid": "02d8c55750904b7f4794139bcfa51693",
"text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.",
"title": ""
},
{
"docid": "9e243ada78a3920a9af58f9958408399",
"text": "The problem of non-iterative one-shot and non-destructive correction of unavoidable mistakes arises in all Artificial Intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation could be achieved with probability close to one by linear discriminants. Based on fundamental properties of measure concentration, we show that for M1-ϑ, where 1>ϑ>0 is a given small constant. Exact values of a,b>0 depend on the probability distribution that determines how the random M-element sets are drawn, and on the constant ϑ. These stochastic separation theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples.",
"title": ""
},
{
"docid": "1d1f93011e83bcefd207c845b2edafcd",
"text": "Although single dialyzer use and reuse by chemical reprocessing are both associated with some complications, there is no definitive advantage to either in this respect. Some complications occur mainly at the first use of a dialyzer: a new cellophane or cuprophane membrane may activate the complement system, or a noxious agent may be introduced to the dialyzer during production or generated during storage. These agents may not be completely removed during the routine rinsing procedure. The reuse of dialyzers is associated with environmental contamination, allergic reactions, residual chemical infusion (rebound release), inadequate concentration of disinfectants, and pyrogen reactions. Bleach used during reprocessing causes a progressive increase in dialyzer permeability to larger molecules, including albumin. Reprocessing methods without the use of bleach are associated with progressive decreases in membrane permeability, particularly to larger molecules. Most comparative studies have not shown differences in mortality between centers reusing and those not reusing dialyzers, however, the largest cluster of dialysis-related deaths occurred with single-use dialyzers due to the presence of perfluorohydrocarbon introduced during the manufacturing process and not completely removed during preparation of the dialyzers before the dialysis procedure. The cost savings associated with reuse is substantial, especially with more expensive, high-flux synthetic membrane dialyzers. With reuse, some dialysis centers can afford to utilize more efficient dialyzers that are more expensive; consequently they provide a higher dose of dialysis and reduce mortality. Some studies have shown minimally higher morbidity with chemical reuse, depending on the method. Waste disposal is definitely decreased with the reuse of dialyzers, thus environmental impacts are lessened, particularly if reprocessing is done by heat disinfection. It is safe to predict that dialyzer reuse in dialysis centers will continue because it also saves money for the providers. Saving both time for the patient and money for the provider were the main motivations to design a new machine for daily home hemodialysis. The machine, developed in the 1990s, cleans and heat disinfects the dialyzer and lines in situ so they do not need to be changed for a month. In contrast, reuse of dialyzers in home hemodialysis patients treated with other hemodialysis machines is becoming less popular and is almost extinct.",
"title": ""
},
{
"docid": "d735547a7b3a79f5935f15da3e51f361",
"text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.",
"title": ""
},
{
"docid": "634509a9d6484ba51d01f9c049551df5",
"text": "In this paper, we propose a joint training approach to voice activity detection (VAD) to address the issue of performance degradation due to unseen noise conditions. Two key techniques are integrated into this deep neural network (DNN) based VAD framework. First, a regression DNN is trained to map the noisy to clean speech features similar to DNN-based speech enhancement. Second, the VAD part to discriminate speech against noise backgrounds is also a DNN trained with a large amount of diversified noisy data synthesized by a wide range of additive noise types. By stacking the classification DNN on top of the enhancement DNN, this integrated DNN can be jointly trained to perform VAD. The feature mapping DNN serves as a noise normalization module aiming at explicitly generating the “clean” features which are easier to be correctly recognized by the following classification DNN. Our experiment results demonstrate the proposed noise-universal DNNbased VAD algorithm achieves a good generalization capacity to unseen noises, and the jointly trained DNNs consistently and significantly outperform the conventional classification-based DNN for all the noise types and signal-to-noise levels tested.",
"title": ""
},
{
"docid": "fcb9614925e939898af060b9ee52f357",
"text": "The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L/sub 2/ function over (-1, 1)/sup n/. The net uses n input nodes, a single hidden layer whose width is determined by the function to be implemented and the allowable mean square error, and a linear output neuron. Error bounds and an example are given for the method.<<ETX>>",
"title": ""
},
{
"docid": "73dcc6b12b7c50e3699cc6a1230859b5",
"text": "The role of leadership in digital business transformation is a topical issue in need of more in-depth research. Based on an empirical investigation of eight Finnish organizations operating in the service sector, we gain understanding of the role and focus of leadership in the context of digital business transformation. Through a qualitative content analysis of data from 46 interviews, the four main leadership foci of digital business transformation are found: strategic vision and action, leading cultural change, enabling, and leading networks. The findings are discussed in the context of extant research on leadership and digital business development.",
"title": ""
},
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "c26667ae2ee8dbbf4743a70e9826667e",
"text": "Two studies compared college students’ interpersonal interaction online, face-to-face, and on the telephone. A communication diary assessed the relative amount of social interactions college students conducted online compared to face-to-face conversation and telephone calls. Results indicated that while the internet was integrated into college students’ social lives, face-to-face communication remained the dominant mode of interaction. Participants reported using the internet as often as the telephone. A survey compared reported use of the internet within local and long distance social circles to the use of other media within those circles, and examined participants’ most recent significant social interactions conducted across media in terms of purposes, contexts, and quality. Internet interaction was perceived as high in quality, but slightly lower than other media. Results were compared to previous conceptualizations of the roles of internet in one’s social life. new media & society Copyright © 2004 SAGE Publications London, Thousand Oaks, CA and New Delhi Vol6(3):299–318 DOI: 10.1177/1461444804041438 ........................................................................................................................................................................................................................................................",
"title": ""
},
{
"docid": "f291c66ebaa6b24d858103b59de792b7",
"text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.",
"title": ""
},
{
"docid": "60eff31e8f742873cec993f1499385b5",
"text": "There is an increasing interest in employing multiple sensors for surveillance and communications. Some of the motivating factors are reliability, survivability, increase in the number of targets under consideration, and increase in required coverage. Tenney and Sandell have recently treated the Bayesian detection problem with distributed sensors. They did not consider the design of data fusion algorithms. We present an optimum data fusion structure given the detectors. Individual decisions are weighted according to the reliability of the detector and then a threshold comparison is performed to obtain the global decision.",
"title": ""
},
{
"docid": "4d4fdd2956ee315d39a94e7501b077ad",
"text": "While in recent years machine learning (ML) based approaches have been the popular approach in developing endto-end question answering systems, such systems often struggle when additional knowledge is needed to correctly answer the questions. Proposed alternatives involve translating the question and the natural language text to a logical representation and then use logical reasoning. However, this alternative falters when the size of the text gets bigger. To address this we propose an approach that does logical reasoning over premises written in natural language text. The proposed method uses recent features of Answer Set Programming (ASP) to call external NLP modules (which may be based on ML) which perform simple textual entailment. To test our approach we develop a corpus based on the life cycle questions and showed that Our system achieves up to 18% performance gain when compared to standard MCQ solvers. Developing intelligent agents that can understand natural language, reason and use commonsense knowledge has been one of the long term goals of AI. To track the progress towards this goal, several question answering challenges have been proposed (Levesque, Davis, and Morgenstern 2012; Clark et al. 2018; Richardson, Burges, and Renshaw 2013; Rajpurkar et al. 2016). Our work here is related to the school level science question answering challenge, ARISTO (Clark 2015; Clark et al. 2018). As shown in (Clark et al. 2018) existing IR based and end-to-end machine learning systems work well on a subset of science questions but there exists a significant amount of questions that appears to be hard for existing solvers. In this work we focus on one particular genre of such questions, namely questions about life cycles (and more generally, sequences), even though they have a small presence in the corpus. To get a better understanding of the “life cycle” questions and the “hard” ones among them consider the questions from Table 1. The text in Table 1, which describes the life cycle of a frog does not contain all the knowledge that is necessary to answer the questions. In fact, all the questions require some additional knowledge that is not given in the text. Question 1 requires knowing the definition of “middle” of a sequence. Question 2 requires the knowledge of “between”. Question Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Life Cycle of a Frog order: egg→ tadpole→ tadpole with legs→ adult egg Tiny frog eggs are laid in masses in the water by a female frog. The eggs hatch into tadpoles. tadpole (also called the polliwog) This stage hatches from the egg. The tadpole spends its time swimming in the water, eating and growing. Tadpoles breathe using gills and have a tail. tadpole with legs In this stage the tadpole sprouts legs (and then arms), has a longer body, and has a more distinct head. It still breathes using gills and has a tail. froglet In this stage, the almost mature frog breathes with lungs and still has some of its tail. adult The adult frog breathes with lungs and has no tail (it has been absorbed by the body). 1. What is the middle stage in a frogs life? (A) tadpole with legs (B) froglet 2. What is a stage that comes between tadpole and adult in the life cycle of a frog? (A) egg (B) froglet 3. What best indicates that a frog has reached the adult stage? (A) When it has lungs (B) When its tail has been absorbed by the body Table 1: A text for life cycle of a Frog with few questions. 
3 on other hand requires the knowledge of “a good indicator”. Note that for question 3, knowing whether an adult frog has lungs or if it is the adult stage where the frog loses its tail is not sufficient to decide if option (A) is the indicator or option (B). In fact an adult frog satisfies both the conditions. An adult frog has lungs and the tail gets absorbed in the adult stage. It is the uniqueness property that decides that option (B) is an indicator for the adult stage. We believe to answer these questions the system requires access to this knowledge. Since this additional knowledge of “middle”, “between”, “indicator” (and some related ones which are shown later) is applicable to any sequence in general and is not specific to only life cycles, we aim to provide this knowledge to the question answering system and then plan to train it so that it can recognize the question types. The paradigm of declarative programming provides a natural solution for adding background knowledge. Also the existing semantic parsers perform well on recognizing questions categories. However the existing declarative programming based question answering methods demand the premises (here the life cycle text) to be given in a logical form. For the domain of life cycle question answering this seems a very demanding and impractical requirement due to the wide variety of sentences that can be present in a life cycle text. Also a life cycle text in our dataset contains 25 lines on average which makes the translation more challenging. The question that we then address is, “can the system utilize the additional knowledge (for e.g. the knowledge of an “indicator”) without requiring the entire text to be given in a formal language?” We show that by using Answer Set Programming and some of its recent features (function symbols) to call external modules that are trained to do simple textual entailment, it is possible do declaratively reasoning over text. We have developed a system following this approach that answers questions from life cycle text by declaratively reasoning about concepts such as “middle”, “between”, “indicator” over premises given in natural language text. To evaluate our method a new dataset has been created with the help of Amazon Mechanical Turk. The entire dataset contains 5811 questions that are created from 41 life cycle texts. A part of this dataset is used for testing. Our system achieved up to 18% performance improvements when compared to standard baselines. Our contributions in this work are two-fold: (a) we propose a novel declarative programming method that accepts natural language texts as premises, which as a result extends the range of applications where declarative programming can be applied and also brings down the development time significantly; (b) we create a new dataset of life cycle texts and questions (https://goo.gl/YmNQKp), which contains annotated logical forms for each question. Background Answer Set Programming An Answer Set Program is a collection of rules of the form, L0 :L1, ..., Lm,not Lm+1, ...,not Ln. where each of the Li’s is a literal in the sense of classical logic. Intuitively, the above rule means that if L1, ..., Lm are true and if Lm+1, ..., Ln can be safely assumed to be false then L0 must be true (Gelfond and Lifschitz 1988). The lefthand side of an ASP rule is called the head and the righthand side is called the body. The symbol :(“if”) is dropped if the body is empty; such rules are called facts. 
Throughout this paper, predicates and constants in a rule start with a lower case letter, while variables start with a capital letter. The following ASP program represents question 3 from Table 1 with three facts and one rule. Listing 1: a sample question representation qIndicator(frog,adult). option(a, has(lungs)). option(b, hasNo(tail)). ans(X):option(X,V), indicator(O,S,V),",
"title": ""
},
{
"docid": "6d110ceb82878e13014ee9b9ab63a7d1",
"text": "The fuzzy control algorithm that carries on the intelligent control twelve phases three traffic lanes single crossroads traffic light, works well in the real-time traffic flow under flexible operation. The procedures can be described as below: first, the number of vehicles of all the lanes can be received through the sensor, and the phase with the largest number is stipulated to be highest priority, while the phase turns to the next one from the previous, it transfers into the highest priority. Then the best of the green light delay time can be figured out under the fuzzy rules reasoning on the current waiting formation length and general formation length. The simulation result indicates the fuzzy control method on vehicle delay time compared with the traditional timed control method is greatly improved.",
"title": ""
},
{
"docid": "b8bd0e7a31e4ae02f845fa5f57a5297f",
"text": "In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using Markov Random Field (MRF), inspired from the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), which is a widely-used method in computational linguistics for modeling topics in texts. We extend the standard LDA method in order to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after a few interactions only. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning.",
"title": ""
},
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "da87c8385ac485fe5d2903e27803c801",
"text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the polygon mesh processing. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.",
"title": ""
},
{
"docid": "39ed08e9a08b7d71a4c177afe8f0056a",
"text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
b2a7d807ccab127e4e311bb4e01df53b
|
Picture Recommendation System Built on Instagram
|
[
{
"docid": "9625b24acc9c0de66c65b0ae843b7dad",
"text": "SenticNet is currently one of the most comprehensive freely available semantic resources for opinion mining. However, it only provides numerical polarity scores, while more detailed sentiment-related information for its concepts is often desirable. Another important resource for opinion mining and sentiment analysis is WordNet-Affect, which in turn lacks quantitative information. We report a work on automatically merging these two resources by assigning emotion labels to more than 2700 concepts.",
"title": ""
},
{
"docid": "164fd7be21190314a27bacb4dec522c5",
"text": "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?",
"title": ""
},
{
"docid": "8865cc715acd8960b6e6287610cf0d15",
"text": "There are more than twenty distinct software engineering tasks addressed with text retrieval (TR) techniques, such as, traceability link recovery, feature location, refactoring, reuse, etc. A common issue with all TR applications is that the results of the retrieval depend largely on the quality of the query. When a query performs poorly, it has to be reformulated and this is a difficult task for someone who had trouble writing a good query in the first place. \n We propose a recommender (called Refoqus) based on machine learning, which is trained with a sample of queries and relevant results. Then, for a given query, it automatically recommends a reformulation strategy that should improve its performance, based on the properties of the query. We evaluated Refoqus empirically against four baseline approaches that are used in natural language document retrieval. The data used for the evaluation corresponds to changes from five open source systems in Java and C++ and it is used in the context of TR-based concept location in source code. Refoqus outperformed the baselines and its recommendations lead to query performance improvement or preservation in 84% of the cases (in average).",
"title": ""
}
] |
[
{
"docid": "1b5dd28d1cb6fedeb24d7ac5195595c6",
"text": "Modulation recognition algorithms have recently received a great deal of attention in academia and industry. In addition to their application in the military field, these algorithms found civilian use in reconfigurable systems, such as cognitive radios. Most previously existing algorithms are focused on recognition of a single modulation. However, a multiple-input multiple-output two-way relaying channel (MIMO TWRC) with physical-layer network coding (PLNC) requires the recognition of the pair of sources modulations from the superposed constellation at the relay. In this paper, we propose an algorithm for recognition of sources modulations for MIMO TWRC with PLNC. The proposed algorithm is divided in two steps. The first step uses the higher order statistics based features in conjunction with genetic algorithm as a features selection method, while the second step employs AdaBoost as a classifier. Simulation results show the ability of the proposed algorithm to provide a good recognition performance at acceptable signal-to-noise values.",
"title": ""
},
{
"docid": "0051a8eae3f4889fccd54b6e9f6a4b5f",
"text": "We propose a simple model for textual matching problems. Starting from a Siamese architecture, we augment word embeddings with two features based on exact and paraphrase match between words in the two sentences being considered. We train the model using four types of regularization on datasets for textual entailment, paraphrase detection and semantic relatedness. Our model performs comparably or better than more complex architectures; achieving state-of-the-art results for paraphrase detection on the SICK dataset and for textual entailment on the SNLI dataset.",
"title": ""
},
{
"docid": "a7fef5640016a6e8be3b3a08be7846c8",
"text": "Neural machine translation (NMT) heavily relies on parallel bilingual data for training. Since large-scale, high-quality parallel corpora are usually costly to collect, it is appealing to exploit monolingual corpora to improve NMT. Inspired by the law of total probability, which connects the probability of a given target-side monolingual sentence to the conditional probability of translating from a source sentence to the target one, we propose to explicitly exploit this connection to learn from and regularize the training of NMT models using monolingual data. The key technical challenge of this approach is that there are exponentially many source sentences for a target monolingual sentence while computing the sum of the conditional probability given each possible source sentence. We address this challenge by leveraging the dual translation model (target-to-source translation) to sample several mostly likely source-side sentences and avoid enumerating all possible candidate source sentences. That is, we transfer the knowledge contained in the dual model to boost the training of the primal model (source-to-target translation), and we call such an approach dual transfer learning. Experiment results on English→French and German→English tasks demonstrate that dual transfer learning achieves significant improvement over several strong baselines and obtains new state-of-the-art results.",
"title": ""
},
{
"docid": "41188fd8eb608a801a6a6cc8b9984cc4",
"text": "In voting theory, bribery is a form of manipulative behavior in which an external actor (the briber) offers to pay the voters to change their votes in order to get her preferred candidate elected. We investigate a model of bribery where the price of each vote depends on the amount of change that the voter is asked to implement. Specifically, in our model the briber can change a voter’s preference list by paying for a sequence of swaps of consecutive candidates. Each swap may have a different price; the price of a bribery is the sum of the prices of all swaps that it involves. We prove complexity results for this model, which we call swap bribery, for a broad class of election systems, including variants of approval and k-approval, Borda, Copeland, and maximin.",
"title": ""
},
{
"docid": "cf5b5083d982a1dd4b0c6bb4efb630ca",
"text": "It has been postulated that a good representation is one that disentangles the underlying explanatory factors of variation. However, it remains an open question what kind of training framework could potentially achieve that. Whereas most previous work focuses on the static setting (e.g., with images), we postulate that some of the causal factors could be discovered if the learner is allowed to interact with its environment. The agent can experiment with different actions and observe their effects. More specifically, we hypothesize that some of these factors correspond to aspects of the environment which are independently controllable, i.e., that there exists a policy and a learnable feature for each such aspect of the environment, such that this policy can yield changes in that feature with minimal changes to other features that explain the statistical variations in the observed data. We propose a specific objective function to find such factors and verify experimentally that it can indeed disentangle independently controllable aspects of the environment without any extrinsic reward signal.",
"title": ""
},
{
"docid": "34a8b0d8b7ef49fa5d2e0b9071235d41",
"text": "A MANET is a collection of mobile nodes communicating and cooperating with each other to route a packet from the source to their destinations. A MANET is proposed to support dynamic routing strategies in absence of wired infrastructure and centralized administration. In such networks, limited power in mobile nodes is a big challenge. So energy efficient techniques should be implemented with existing routing protocols to reduce link failure and improve the network lifetime. This paper is presenting an Energy-Efficient Routing protocol that will improve the utilization of link by balancing the energy consumption between utilized and underutilized nodes to meet the above challenge. The protocol deals with various parameters as Residual Energy, Bandwidth, Load and Hop Count for route discovery. The failure of any node in the route when the transmission of data packet is in progress leads to the degradation of the QoS (Quality of Service). To overcome with this issue, the paper proposes two methods for maintenance of the route. The simulation results show that the proposed protocol achieves objectives like minimizing overheads, fast convergence speed, high reliability and gives enhanced results than previous techniques like DSR.",
"title": ""
},
{
"docid": "052a2f536d21bff1a27a096a75ef61ca",
"text": "We test the effect of foreign direct investment (FDI) on economic growth in a cross-country regression framework, utilizing data on FDI flows from industrial countries to 69 developing countries over the last two decades. Our results suggest that FDI is an important vehicle for the transfer of technology, contributing relatively more to growth than domestic investment. However, the higher productivity of FDI holds only when the host country has a minimum threshold stock of human capital. Thus, FDI contributes to economic growth only when a sufficient absorptive capability of the advanced technologies is available in the host economy. 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "5f2b29b87d4d5d9c9eeb176d044c00f3",
"text": "Automated pose estimation is a fundamental task in computer vision. In this paper, we investigate the generic framework of Cascaded Pose Regression (CPR), which demonstrates practical effectiveness in pose estimation on deformable and articulated objects. In particular, we focus on the use of CPR for face alignment by exploring existing techniques and verifying their performances on different public facial datasets. We show that the correct selection of pose-invariant features is critical to encode the geometric arrangement of landmarks and crucial for the overall regressor learnability. Furthermore, by incorporating strategies that are commonly used among the state-of-the-art, we interpret the CPR training procedure as a repeated clustering problem with explicit regressor representation, which is complementary to the original CPR algorithm. In our experiment, the qualitative evaluation of existing alignment techniques demonstrates the success of CPR for facial pose inference that can be conveniently adopted to video detection and tracking applications.",
"title": ""
},
{
"docid": "84d8ff8724df86ce100ddfbb150e7446",
"text": "Adaptive Gaussian mixtures have been used for modeling nonstationary temporal distributions of pixels in video surveillance applications. However, a common problem for this approach is balancing between model convergence speed and stability. This paper proposes an effective scheme to improve the convergence rate without compromising model stability. This is achieved by replacing the global, static retention factor with an adaptive learning rate calculated for each Gaussian at every frame. Significant improvements are shown on both synthetic and real video data. Incorporating this algorithm into a statistical framework for background subtraction leads to an improved segmentation performance compared to a standard method.",
"title": ""
},
{
"docid": "1e55f802a805ca93dd02bf5709aa4e4b",
"text": "BACKGROUND\nThe recombinant BCG ΔureC::hly (rBCG) vaccine candidate induces improved protection against tuberculosis over parental BCG (pBCG) in preclinical studies and has successfully completed a phase 2a clinical trial. However, the mechanisms responsible for the superior vaccine efficacy of rBCG are still incompletely understood. Here, we investigated the underlying biological mechanisms elicited by the rBCG vaccine candidate relevant to its protective efficacy.\n\n\nMETHODS\nTHP-1 macrophages were infected with pBCG or rBCG, and inflammasome activation and autophagy were evaluated. In addition, mice were vaccinated with pBCG or rBCG, and gene expression in the draining lymph nodes was analyzed by microarray at day 1 after vaccination.\n\n\nRESULTS\nBCG-derived DNA was detected in the cytosol of rBCG-infected macrophages. rBCG infection was associated with enhanced absent in melanoma 2 (AIM2) inflammasome activation, increased activation of caspases and production of interleukin (IL)-1β and IL-18, as well as induction of AIM2-dependent and stimulator of interferon genes (STING)-dependent autophagy. Similarly, mice vaccinated with rBCG showed early increased expression of Il-1β, Il-18, and Tmem173 (transmembrane protein 173; also known as STING).\n\n\nCONCLUSIONS\nrBCG stimulates AIM2 inflammasome activation and autophagy, suggesting that these cell-autonomous functions should be exploited for improved vaccine design.",
"title": ""
},
{
"docid": "4b99770d2c9750d89f6250d4af6c7fd5",
"text": "Domain-specific thesauri are high-cost, high-maintenance, high-value knowledge structures. We show how the classic thesaurus structure of terms and links can be mined automatically from Wikipedia. In a comparison with a professional thesaurus for agriculture we find that Wikipedia contains a substantial proportion of its concepts and semantic relations; furthermore it has impressive coverage of contemporary documents in the domain. Thesauri derived using our techniques capitalize on existing public efforts and tend to reflect contemporary language usage better than their costly, painstakingly-constructed manual counterparts",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "25822c79792325b86a90a477b6e988a1",
"text": "As the social networking sites get more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites where users communicate and interact on various topics. Most of the current spam filtering methods in Twitter focus on detecting the spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques to detect the spam at tweet level. These types of techniques can prevent the spam in real time. To detect the spam at tweet level, often features are defined, and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods are showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (Glove, Word2vec) to train the model. The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network which acts as a meta-classifier. We evaluate our method on two data sets, one data set is balanced, and another one is imbalanced. The experimental results show that our proposed method outperforms the existing methods.",
"title": ""
},
{
"docid": "d7065dccb396b0a47526fc14e0a9e796",
"text": "A modified compact antipodal Vivaldi antenna is proposed with good performance for different applications including microwave and millimeter wave imaging. A step-by-step procedure is applied in this design including conventional antipodal Vivaldi antenna (AVA), AVA with a periodic slit edge, and AVA with a trapezoid-shaped dielectric lens to feature performances including wide bandwidth, small size, high gain, front-to-back ratio and directivity, modification on E-plane beam tilt, and small sidelobe levels. By adding periodic slit edge at the outer brim of the antenna radiators, lower-end limitation of the conventional AVA extended twice without changing the overall dimensions of the antenna. The optimized antenna is fabricated and tested, and the results show that S11 <; -10 dB frequency band is from 3.4 to 40 GHz, and it is in good agreement with simulation one. Gain of the antenna has been elevated by the periodic slit edge and the trapezoid dielectric lens at lower frequencies up to 8 dB and at higher frequencies up to 15 dB, respectively. The E-plane beam tilts and sidelobe levels are reduced by the lens.",
"title": ""
},
{
"docid": "4419d61684dff89f4678afe3b8dc06e0",
"text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.",
"title": ""
},
{
"docid": "681394e4cdb92de142f1bb9447d02110",
"text": "Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, the attacks based on Houdini achieve higher success rate than those based on the traditional surrogates used to train the models while using a less perceptible adversarial perturbation.",
"title": ""
},
{
"docid": "1a34642809ce718c777d4d3956fdfe48",
"text": "We propose a simple, efficient and effective method using deep convolutional activation features (CNNs) to achieve stat- of-the-art classification and segmentation for the MICCAI 2014 Brain Tumor Digital Pathology Challenge. Common traits of such medical image challenges are characterized by large image dimensions (up to the gigabyte size of an image), a limited amount of training data, and significant clinical feature representations. To tackle these challenges, we transfer the features extracted from CNNs trained with a very large general image database to the medical image challenge. In this paper, we used CNN activations trained by ImageNet to extract features (4096 neurons, 13.3% active). In addition, feature selection, feature pooling, and data augmentation are used in our work. Our system obtained 97.5% accuracy on classification and 84% accuracy on segmentation, demonstrating a significant performance gain over other participating teams.",
"title": ""
},
{
"docid": "7b4a66d354443dbe560a933c9c8dd8d4",
"text": "Skin color is a well-recognized adaptive trait and has been studied extensively in humans. Understanding the genetic basis of adaptation of skin color in various populations has many implications in human evolution and medicine. Impressive progress has been made recently to identify genes associated with skin color variation in a wide range of geographical and temporal populations. In this review, we discuss what is currently known about the genetics of skin color variation. We enumerated several cases of skin color adaptation in global modern humans and archaic hominins, and illustrated why, when, and how skin color adaptation occurred in different populations. Finally, we provided a summary of the candidate loci associated with pigmentation, which could be a valuable reference for further evolutionary and medical studies. Previous studies generally indicated a complex genetic mechanism underlying the skin color variation, expanding our understanding of the role of population demographic history and natural selection in shaping genetic and phenotypic diversity in humans. Future work is needed to dissect the genetic architecture of skin color adaptation in numerous ethnic minority groups around the world, which remains relatively obscure compared with that of major continental groups, and to unravel the exact genetic basis of skin color adaptation.",
"title": ""
},
{
"docid": "37b8114afeba61ac1e381405f2503ced",
"text": "Measurements of the phases of free jet waves relative to an acoustic excitation, and of the pattern and time phase of the sound pressure produced by the same jet impinging on an edge, provide a consistent model for Stage I frequencies of edge tones and of an organ pipe with identical geometry. Both systems are explained entirely in terms of volume displacement of air by the jet. During edge-tone oscillation, 180 ø of phase delay occur on the jet. Peak positive acoustic pressure on a given side of the edge occurs at the instant the jet profile crosses the edge and starts into that side. For the pipe, additional phase shifts occur that depend on the driving points for the jet current, the Q of the pipe, and the frequency of oscillation. Introduction of this additional phase shift yields an accurate prediction of the frequencies of a blown pipe and the blowing pressure at which mode jumps will occur.",
"title": ""
},
{
"docid": "0b74c1fbfe8ad31d2c73c8db6ce8b411",
"text": "To investigate fast human reaching movements in 3D, we asked 11 right-handed persons to catch a tennis ball while we tracked the movements of their arms. To ensure consistent trajectories of the ball, we used a catapult to throw the ball from three different positions. Tangential velocity profiles of the hand were in general bell-shaped and hand movements in 3D coincided with well known results for 2D point-to-point movements such as minimum jerk theory or the 2/3rd power law. Furthermore, two phases, consisting of fast reaching and slower fine movements at the end of hand placement could clearly be seen. The aim of this study was to find a way to generate human-like (catching) trajectories for a humanoid robot.",
"title": ""
}
]
scidocsrr